    Addressing AI concerns with a realistic mindset

    Ella empowers the users of its AI tools

    Hardly a day goes by without AI being discussed in the media. However, the current discourse tends to focus on fear-mongering depictions of AI’s risks. While it is essential to acknowledge potential challenges, we must resist succumbing to fear and instead embrace a balanced and realistic perspective on AI’s societal impacts.

    Throughout history, every social upheaval has encompassed risks as well as potential. Technological change through AI is no exception. It is important to approach AI challenges proactively and responsibly, but not with fear. Fear can hinder progress and innovation, limiting us from fully harnessing the transformative power of AI for the betterment of society.

    Education and transparency play a key role here. Only by promoting knowledge about AI can its societal impacts be better understood and its challenges overcome. This will empower individuals and corporations to make informed decisions about AI, enabling them to use AI tools responsibly and ethically.

    Transparency in AI development and deployment is equally vital, fostering trust and cooperation between developers, users, and society at large. It ensures that AI systems are held accountable and operate in a manner aligned with human values and preferences.

    The enormous potential of AI should be harnessed to enhance human capabilities. To do so, humans must be placed at the center of AI development. Ella exemplifies this commitment by focusing on ethical design and transparency in the development of our AI software.

    We believe that empowering users with AI technology should not come at the expense of privacy, autonomy, or fairness. Instead, AI should be a collaborative partner and augment human skills. Ella’s goal is to empower users to extend their human capabilities through our AI tools.

     

    Artificial intelligence, language and the question of social responsibility

    Author: Michael Keusgen, CEO, Ella Media AG

    Systems based on artificial intelligence (AI) generally aim to support people in their actions. In doing so, AI systems open up new possibilities for minimizing errors, making processes more efficient and dealing with existing problems. In this context, artificial intelligence has the potential to exert a growing influence on human decisions. It is therefore important to handle learning systems responsibly.

     

    AI imitates language without understanding the content

    AI itself is value-free; it captures, analyzes and processes language based solely on the data with which it has been trained. In doing so, AI reproduces existing patterns without actually understanding the content of its own texts. This can have undesirable consequences, such as reproducing a stereotypical representation of certain groups of people.

    A well-known and well-researched example is discrimination against women in AI-supported job application processes – for example, because the underlying data originates from a time when almost exclusively men worked in the occupational field in question.
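
    To make this concrete, here is a minimal, hypothetical sketch of how such a bias can be surfaced before training. The dataset, column names, and 80% threshold are illustrative assumptions, not Ella’s actual tooling:

```python
import pandas as pd

# Hypothetical historical hiring data; columns are illustrative.
applications = pd.DataFrame({
    "gender": ["m", "m", "m", "m", "f", "f"],
    "hired":  [1,   1,   0,   1,   0,   0],
})

# Selection rate per group: the share of applicants who were hired.
rates = applications.groupby("gender")["hired"].mean()

# "Four-fifths rule" heuristic: flag the data if one group's selection
# rate falls below 80% of the most favored group's rate.
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: the training data encodes a hiring bias.")
```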
     

    Those who develop speech AI bear great responsibility

    Speech-based AI systems are only as good as the texts they learn from. For this reason, system operators and developers must consistently identify biases – and the discrimination they cause – stemming from errors in data collection and selection, or in the algorithmic processing of data, long before an AI is used for the first time. This requires a value orientation as well as a high degree of critical thinking and willingness to learn on the part of the companies and the people behind them.

    For companies, however, responsible use of speech AI also means keeping an eye on possible misuse of the technology as early as the development stage – for example, through automatically generated texts with false or intentionally misleading content. AI systems can be deliberately used to create such distorted information in bulk and for specific audiences – including to influence public discourse.

    Wise and targeted selection of data sources also prevents misuse. An editorial AI system like Carla, based solely on data from major news outlets, offers no potential for abuse in disinformation campaigns.
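
    Carla’s actual pipeline is not public, but as a minimal sketch – with hypothetical source names – such a restriction can be enforced as an allowlist filter applied before any document enters the training corpus:

```python
# Hypothetical allowlist of vetted news outlets; names are illustrative.
TRUSTED_SOURCES = {"dpa", "reuters", "afp"}

def filter_training_corpus(documents):
    """Keep only documents whose source is on the vetted allowlist."""
    kept = [doc for doc in documents if doc.get("source") in TRUSTED_SOURCES]
    print(f"kept {len(kept)} of {len(documents)} documents")
    return kept

corpus = [
    {"source": "reuters", "text": "Central bank raises rates."},
    {"source": "random-blog", "text": "Miracle cure revealed!"},
]
clean_corpus = filter_training_corpus(corpus)  # drops the unvetted post
```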

     

    Speech AI opens up new ways in the battle against disinformation

    Corporate social responsibility also includes the question of how AI can be used to benefit people. When it comes to recognizing unintentional errors, plagiarism, and even targeted fake news, artificial intelligence is often already streets ahead of humans. The first applications are already in use. For example, special AI systems, fed in real time with texts and other social media data, can help combat bots.
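
    As a hedged illustration of the idea – a toy sketch with invented example posts, not a production bot or disinformation detector – even a simple text classifier captures the basic pattern-recognition step such systems automate at scale:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled posts; real systems learn from large, curated datasets.
posts = [
    "BREAKING!!! Click here to claim your prize now!!!",
    "The city council met today to discuss the new budget.",
    "You won't BELIEVE what they are hiding from you!!!",
    "The finance ministry published its quarterly report.",
]
labels = [1, 0, 1, 0]  # 1 = bot-like/disinformation, 0 = regular post

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

# Score a new, unseen post in (near) real time.
print(classifier.predict(["Click now!!! The secret they hide from you"]))
```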

    Posted on September 8, 2021

    Our optimized strategy for customized AI solutions

    You can now request customized AI solutions on Ella’s brand-new Solutions page.

    In a rapidly evolving generative AI market, we have fine-tuned our strategy to ensure we are not only keeping pace but setting the pace. 

    We now not only provide a diverse array of specialized AI services but also engage in deep collaboration with our esteemed clients. This two-pronged approach has ushered in a new era of opportunity, enabling us to craft tailored AI solutions that integrate seamlessly into our clients’ workflows and realize their unique visions and goals. 

    As evidence of this strategic shift, we are excited to unveil our brand new “Solutions” webpage – a digital gateway where you can explore our comprehensive spectrum of AI solutions and submit your requests for customized AI solutions. 


    Learn more

    German skepticism about generative AI

    Germany views generative AI as a risk, according to the study “Digitization of the Economy” conducted by the industry association Bitkom

    While investments in artificial intelligence are increasing in the rest of the world, a disapproving attitude toward AI can be observed in Germany. Particularly with regard to generative AI, decision-makers in German companies remain unimpressed by ChatGPT and the like. This was the finding of a representative survey* by the industry association Bitkom.

    The majority of respondents do not believe that these AI applications can close skill gaps or enable new business models. Most respondents view AI more as a risk, especially with regard to staff reductions. 

    On the one hand, we criticize the cautious approach of German investors and companies, because this skepticism limits the country’s participation in the global AI revolution and in international AI competition.

    To fully utilize the transformative power of AI, it is crucial for Germany to quickly strike a balance between caution and innovation. This is the only way to harness the potential of AI while overcoming the associated challenges.

    But on the other hand, we understand how important it is to address concerns about the technology’s perceived lack of potential and the possible displacement of jobs.

    To counter this skepticism about generative AI, our general approach and our software development focus on societal well-being and prioritize human action.

     

    *Please note that the Bitkom study is only available in German.

     

    Using data for training AI models

    Data security and protection at Ella

    Using personal data for AI training is an emerging trend. X (formerly Twitter) has revised its privacy policy: starting September 29, 2023, X will use collected personal data to train its AI models. Other companies, such as Zoom and Microsoft, have already confirmed or announced plans to use data for AI training. However, such practices have been met with resistance and petitions. 

    The EU is currently negotiating rules for the use of such data by AI systems. One proposal is that companies must obtain explicit permission from data owners before using their personal data for AI training.

    We welcome the EU’s intention to regulate AI-based data use, as responsible data handling is particularly important at Ella. This regulation will further reinforce our existing data protection principles, which are as follows: 

    • Secure storage of data
    • Transparent communication about data use and its purposes
    • Data management in accordance with applicable law

     

    The Hollywood strike

    The importance of responsible AI that respects copyrights

    The use of generative AI in Hollywood has become one of the central concerns of the current strikes and demands for regulation. Actors, represented by the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), fear that AI poses a threat to their creative professions and could exploit their identities and talent without consent or proper compensation. They have joined the Writers Guild of America (WGA) in demanding explicit AI regulations to protect writers and their works.

    The concern stems from the increasing effectiveness and prevalence of AI tools that can create images and text, replicate faces and voices, and generate human-like writing. While these tools often have limitations and produce derivative or inaccurate content, entertainers and other creative professionals worry that without strict regulation, their work could be replicated and remixed by AI, reducing their control and ability to earn a living.

    Ella supports the striking actors’ and writers’ call for regulations that safeguard their interests and ensure fair compensation for their work in the face of advancing AI technologies. We believe that an open dialogue and collaboration between stakeholders is essential to responsibly realize the potential of AI.

    At Ella, human action and copyrights are prioritized. That’s why we’ve always been committed to developing responsible AI technologies that support content creators in the creative process while respecting their intellectual property.

    Our policy is and will continue to be to develop AI tools that meet the highest ethical criteria and protect the rights and interests of our users.

     

    How to overcome challenges for AI in industry and SMEs

    Our principles

    As part of a webinar series on AI education for employees*, the Platform Industrie 4.0, led by the German Federal Ministry for Economic Affairs and Climate Action and the German Federal Ministry of Education and Research, addresses the potential as well as the challenges of AI for industry and SMEs. 

    On the one hand, AI can help to reduce the workload of personnel, increase quality and counteract the shortage of skilled workers. On the other hand, however, the challenges, such as a lack of AI skills among employees, a low digital maturity level of companies or data protection concerns, should not be overlooked.

    At Ella, we strongly believe that these challenges can only be effectively addressed through transparency and education, thereby enabling the great opportunities AI offers. As a provider of AI writing assistants, we are committed to the following principles:

    • We want to proactively demystify this technology so that our customers can make informed decisions.
    • We strive to provide our customers with a clear understanding of how our AI tools and data processing work. 
    • We also support our customers every step of the way – from initial implementation to ongoing use – ensuring a consistently smooth and successful integration of our AI tools.

     

    *Please note that the recorded webinar is only available in German.

     

    Watch webinar now

    ChatGPT: Pioneering but not yet fully developed

    ChatGPT has attracted much attention in recent months. Indeed, ChatGPT’s performance is impressive – and together with solutions like Midjourney or Lensa, it has made the general public aware of AI as a content creation tool.

    ChatGPT impressively demonstrates how human-centric design and low-threshold access can get people to open up to the use of AI – and what it can make possible. Emails, applications, essays, program code, word games: There are almost no limits to the versatility of ChatGPT.

    💡 Know what you’re dealing with

    Behind ChatGPT is a language model that outshines everything known so far. But it is important to bear in mind that it is designed as a generalist: ChatGPT’s text completion relies on a wide range of content in its training data. We must therefore be aware that we are not dealing with quality-assured data in ChatGPT. The model assembles fluent output from probabilities and patterns present in its comprehensive training data. It does not, however, guarantee factual accuracy, so users do not know whether a piece of information was “invented” or has a reliable source. Human expertise is needed to identify possible misstatements – and one strand of the current discourse around ChatGPT centers on the urgency of developing literacy and skills in this space.
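
    To illustrate the mechanism in a few lines – a deliberately simplified sketch with an invented toy vocabulary, not OpenAI’s implementation – a language model repeatedly turns the current context into a probability distribution over possible next tokens and samples from it. Fluency, not factual accuracy, is what this process produces:

```python
import random

# Toy "language model": maps a context token to next-token probabilities.
# The numbers are invented; a real model derives them from billions of
# learned parameters.
MODEL = {
    "the": {"cat": 0.5, "moon": 0.3, "data": 0.2},
    "cat": {"sleeps": 0.7, "writes": 0.3},
}

def next_token(context):
    """Sample the next token from the model's probability distribution."""
    distribution = MODEL.get(context, {"<end>": 1.0})
    tokens, weights = zip(*distribution.items())
    return random.choices(tokens, weights=weights)[0]

# Generation is just repeated sampling: plausible, never fact-checked.
token, sentence = "the", ["the"]
while token != "<end>" and len(sentence) < 6:
    token = next_token(token)
    sentence.append(token)
print(" ".join(t for t in sentence if t != "<end>"))
```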

    ➡ As with all tools, users must learn to use ChatGPT and be aware of its weaknesses.

    Responsible AI practices are essential at Ella

    The EU’s proposed AI Act reinforces our existing quality and compliance commitments.

    Policy makers are finally focusing on quality and compliance. At Ella, we have been doing this for five years: our AI tools should meet the highest ethical criteria and uphold the rights and interests of our users. We have therefore proactively implemented robust measures in the development of our AI products, including deliberate data selection and cleansing as well as our fact-checking and correction system.

    The EU’s proposed AI Act reinforces our existing commitments. It encourages the adoption of responsible AI practices, which we have been championing from the start. We appreciate the EU’s intent to establish a regulatory framework for artificial intelligence, because when it comes to implementing responsible AI practices, we see room for improvement – particularly in the SME sector.

    Furthermore, we believe a balance between fostering innovation and ensuring the responsible use of AI is crucial. We recognize the need for regulatory guidelines and expect them to keep competition fair and innovative, grounded in fundamental democratic rights.

    We will closely monitor the development and evolution of the AI Act, ensuring that our AI tools meet the highest quality standards and comply with the regulations that are ultimately enacted. Our goal is to anticipate future regulations rather than merely react to them.