At HiQ, we specialize in developing AI solutions that align with real business objectives.

How to automate repetitive tasks in Jira with LLMs

Increasing efficiency with AI

By combining LLMs and Jira Automation, you can not only save time, but also increase the efficiency and accuracy of your business processes. We explain how it works.

In many modern companies, ticket systems are essential for organizing and tracking work processes. Nevertheless, inquiries, problems or tasks often still reach the relevant departments by email.

Problem: Resource consumption due to repetitive classification work

Once these emails reach the responsible persons, they are usually processed and classified manually before they can be handled in a ticket system such as Atlassian's Jira. This process ties up valuable personnel resources that could be put to better use, for example on strategic work.

Categorizing the issues and filling in the fields of the ticket system appropriately requires an understanding of the context. Until now, there was no practical way to automate these workflows. Thanks to advances in artificial intelligence (AI), these use cases can now be seen in a new light.

Solution: Automation with Jira Automation in conjunction with LLMs

Large Language Models (LLMs) are a special type of AI trained to understand, process and generate human language. It is their ability to recognize patterns and relationships in context that makes these models so exciting for the scenario described: by integrating LLMs into Jira Automation, recurring tasks can not only be automated, but also made more intelligent.

One exciting use case is the processing of incoming security updates. Thanks to the power of LLMs, the context of these updates is automatically recognized and relevant custom fields in Jira are filled accordingly.

Imagine the IT security team regularly receives security-critical information about the systems used in the company. Until now, the workflow for processing these messages in Jira tickets has been as follows: the messages arrive in Jira via an integration or a connected email inbox, are then classified by a first-level team, and the information is finally transferred manually to the corresponding fields in Jira.

By using an LLM in conjunction with Jira Automation, it is possible to automatically recognize which systems are affected and check whether they are available in the company. In addition, prioritizations can be made and important information such as the affected group can be derived from the context. Automated assignment to the relevant teams is also possible.
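
To make this concrete, here is a minimal sketch in Python of such a classification step, assuming a script that is triggered for each new issue (for example via a Jira Automation web request), an OpenAI-compatible chat endpoint, and made-up custom field IDs. All names are placeholders, not a finished integration:

    import json
    import requests

    # Placeholders -- replace with your own endpoint, model, credentials and field IDs.
    LLM_URL = "https://llm.example.com/v1/chat/completions"  # any OpenAI-compatible API
    MODEL = "your-model"
    JIRA_URL = "https://your-domain.atlassian.net"
    JIRA_AUTH = ("automation@example.com", "your-api-token")

    SYSTEM_PROMPT = (
        "Classify the following security advisory. Respond only with JSON "
        "containing the keys: affected_system, priority (Highest/High/Medium/Low), "
        "affected_group."
    )

    def classify_advisory(email_body: str) -> dict:
        """Ask the LLM to extract structured fields from the raw email text."""
        response = requests.post(
            LLM_URL,
            json={
                "model": MODEL,
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": email_body},
                ],
            },
            timeout=60,
        )
        response.raise_for_status()
        # Assumes the model follows the prompt and returns parseable JSON.
        return json.loads(response.json()["choices"][0]["message"]["content"])

    def fill_jira_fields(issue_key: str, extracted: dict) -> None:
        """Write the extracted values into Jira; the custom field IDs are examples."""
        payload = {
            "fields": {
                "priority": {"name": extracted["priority"]},
                "customfield_10050": extracted["affected_system"],
                "customfield_10051": extracted["affected_group"],
            }
        }
        response = requests.put(
            f"{JIRA_URL}/rest/api/3/issue/{issue_key}",
            json=payload,
            auth=JIRA_AUTH,
            timeout=30,
        )
        response.raise_for_status()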

This solution significantly reduces manual effort and ensures that no relevant information is overlooked.

Aside: Data protection and security with locally hosted LLMs

A key concern for many companies when it comes to the use of AI is data protection. Particularly in industries that work with sensitive data, such as healthcare, the financial sector or security-critical IT systems, there are often reservations about passing sensitive information to external, cloud-based AI systems hosted in the US. Concerns about compliance with strict data protection regulations such as the GDPR are justified, and many companies therefore decide against using AI altogether in order to avoid potential data breaches.

This is where the use of locally hosted LLMs offers an ideal solution. By implementing such models on-premise or in data centers within the EU, companies can retain full control over their data and ensure that it is not transferred to external service providers or countries outside Europe. European cloud providers and local data centers make it possible to keep the entire data flow and processing within the EU, which allows the strict requirements of the GDPR to be met.
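
To illustrate, the classification sketch above would remain unchanged with a locally hosted model. Ollama, for example, exposes an OpenAI-compatible endpoint on localhost, so only two placeholders need to change (assuming Ollama is running and the model has been pulled):

    # Point the same client at a model served on your own hardware.
    LLM_URL = "http://localhost:11434/v1/chat/completions"  # Ollama's OpenAI-compatible API
    MODEL = "llama3.1"  # any model available in the local Ollama instance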

Another advantage of locally hosted LLMs is that they can be customized to a company’s specific requirements. Sensitive data such as email content, customer information or security-critical messages can be processed securely without leaving the company’s protected IT environment. In addition, regular security updates and internal audits allow security gaps to be proactively identified and closed.

For companies looking for a security-compliant and privacy-friendly AI solution, the combination of Jira Automation with locally hosted LLMs offers the perfect approach to reap the benefits of automation without compromising on data protection. We will be happy to help you select and use a local LLM that is suitable for your company.

Added value: Increased efficiency thanks to intelligent automation

By using LLMs and Jira Automation, companies can not only save time and resources, but also ensure the accuracy and consistency of processing. Important aspects are reliably recognized and the relevant fields in Jira are automatically completed. This allows the team to concentrate on the really important tasks, while repetitive work is covered intelligently and efficiently by the system.

The solution offers the following advantages in particular:

  • Automated classification of incoming emails
  • Extraction of context information and validation against existing assets
  • Employees can focus on core tasks and problem solving instead of spending time manually sifting through incoming mail
  • Faster detection of security gaps and therefore a lower risk of outages

Are you interested in using AI in Jira / Jira Service Management to make your processes even more efficient?

As ESM experts and an Atlassian Platinum Solution Partner, we can answer your questions.

Outlook: The future of AI-supported automation

The integration of LLMs in Jira Automation offers enormous potential for holistic service delivery as part of enterprise service management, while reducing additional effort. In the future, the use of AI could be further optimized by operating LLMs locally to better protect sensitive data.

In addition, integrations with other systems, such as the Frends automation platform, can also be implemented to enable even more seamless workflow management. The use of Atlassian Forge for deeper integration, for example to include attachments in incoming emails, also opens up exciting possibilities.

Conclusion: A step into the future of automation

The combination of AI and the automation options in applications such as Jira opens up completely new ways for companies to optimize processes and free up valuable resources.

Would you like to know more? Then get in touch.

A Game-Changer in Open Source AI

Meta has just dropped a bombshell in the AI world with the release of Llama 3.1, and it’s a big deal. Here’s why we are so excited.

The 405B Powerhouse

Llama 3.1 405B is a groundbreaking model that stands out as the first open-weights AI capable of rivaling the performance of closed-source giants like GPT-4 and Claude 3.5 Sonnet. This development significantly narrows the gap between open and closed models, democratizing access to cutting-edge AI capabilities. The open-weights nature of Llama 3.1 allows the community to fine-tune and adapt the model, potentially unleashing a wave of specialized, high-performance models tailored to various needs.

Accessibility for all

The Llama 3.1 8B model represents a major leap forward for consumer-grade hardware. It outperforms GPT-3.5 on many benchmarks while being able to run locally and at no cost. This advancement places recent state-of-the-art performance in the hands of individual developers and researchers, empowering them to innovate without the need for expensive infrastructure.
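
To give a feeling for what running locally can look like, here is a minimal sketch using Ollama, one popular (but by no means the only) way to serve Llama 3.1 8B on your own machine; it assumes a running Ollama instance into which the model has been pulled ("ollama pull llama3.1"):

    import requests

    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3.1",
            "messages": [{"role": "user", "content": "Why do open weights matter?"}],
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    print(response.json()["message"]["content"])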

Key Improvements

Llama 3.1 comes with several significant enhancements:

  • 128K context length across all models: This allows for better handling of longer inputs, enabling more complex tasks and extended conversations.
  • Multilingual support for eight languages: This broadens the model’s usability across different linguistic contexts, making it more versatile and inclusive.
  • Enhanced reasoning and tool use capabilities: These improvements make the model more adept at logical reasoning and utilizing external tools effectively.
  • Improved instruction-following and chat performance: The model now better understands and executes instructions, providing more accurate and coherent responses in chat applications.

What this means for the future

The release of Llama 3.1, particularly the 405B model, marks a significant milestone in open-source AI. It promises to accelerate innovation, enable new applications, and push the boundaries of what’s possible with locally-run models. As this trend continues, we can expect even more powerful and accessible AI tools to emerge in the near future.

Stay tuned as the community begins to explore and build upon these groundbreaking models!

Want to learn more about artificial intelligence and its models?

Then our AI training course is just right for you! You can find more information here.

Sebastian Kouba

Sebastian drives innovation in our IT department through his generative AI expertise. When not at work, he’s reliving his youth on the beach volleyball court or crafting the ideal cappuccino.

The Future of Machine Learning Operations

Machine learning is transforming businesses, but managing its lifecycle efficiently is a growing challenge. MLOps provides the necessary tools and processes to streamline data handling, model training, deployment, and monitoring. We explain why robust MLOps practices are essential to ensure reliability, scalability, and governance.

Machine Learning (ML) has witnessed explosive growth in recent years. As organizations increasingly leverage Machine Learning models to drive business value, the need for robust Machine Learning Operations (MLOps) practices has become paramount. MLOps encompasses the tools and processes required to manage the entire lifecycle of Machine Learning efficiently, from data acquisition, data processing, and model training to deployment, monitoring, and governance.

In the following, we delve into the exciting future of MLOps, exploring emerging trends poised to reshape the technical landscape and the challenges companies must address to ensure successful Machine Learning deployments.

What is Machine Learning?

Machine learning is a form of artificial intelligence (AI) that enables computers to learn without explicit programming. By analyzing data and utilizing statistical techniques, machines can recognize patterns and enhance their performance in specific tasks. This technology finds application in various domains, ranging from spam filtering to facial recognition software. It also encompasses the subfield of Deep Learning, which serves as the foundation for the recently developed Large Language Models (LLMs), such as ChatGPT.
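
As a minimal illustration of this idea, the following sketch (using scikit-learn, with made-up toy data) trains a tiny spam classifier from labeled examples rather than hand-written rules:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy training data -- a real spam filter would learn from thousands of emails.
    emails = [
        "Win a free prize now", "Cheap loans, click here",
        "Meeting moved to 3 pm", "Quarterly report attached",
    ]
    labels = ["spam", "spam", "ham", "ham"]

    # The model learns word patterns from the labeled examples, not explicit rules.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(emails, labels)

    print(model.predict(["Click here to win"]))  # -> ['spam']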

Embracing the Trends: A Glimpse into the Future of MLOps

The MLOps landscape constantly evolves, with new technologies and methodologies emerging to address the complexities of managing ML models in production. Here are some key trends that are shaping the future of MLOps:

  • Cloud-Native MLOps: Cloud computing offers a scalable, cost-effective platform for managing ML workloads. Cloud-based MLOps end-to-end platforms streamline the entire ML lifecycle, from data storage and compute resources to model training and deployment. This enables organizations to leverage the cloud’s elasticity to handle fluctuating workloads and experiment with different models efficiently.

    One example of a commercial end-to-end platform in MLOps is Amazon SageMaker, a cloud-based ML platform for developing, training, and deploying ML models. Kubeflow also falls into this category of platforms, but unlike Amazon SageMaker, it is open-source and free of charge.
  • Automated ML Pipelines: Automating repetitive tasks within the ML lifecycle, such as data ingestion, data preprocessing, feature engineering, and model selection, can significantly improve efficiency and reduce human error. Automated ML pipelines leverage tools like AutoML (Automated Machine Learning) to automate various stages of model development, allowing data scientists to focus on more strategic tasks like developing innovative model architectures and identifying novel business use cases.

    Among others, Azure with Azure Automated Machine Learning, Amazon Web Services (AWS) with AWS AutoML Solutions, and Google Cloud Platform (GCP) with AutoML offer services in this area that minimize the effort involved in implementing this complex method.
  • Continuous Integration and Continuous Delivery (CI/CD) for Machine Learning: Implementing CI/CD practices in MLOps ensures that changes to models and code are integrated and delivered seamlessly. This fosters a rapid experimentation and iteration culture, enabling organizations to quickly adapt models to changing business needs and data distributions.
  • Model Explainability and Interpretability (XAI): As ML models become more complex, understanding their decision-making processes becomes crucial. XAI techniques help to explain how models arrive at their predictions, fostering trust in model outputs and enabling stakeholders to identify potential biases or fairness issues.

    Tools that can support you in making ML models more explainable and their decisions more transparent include Alibi Explain, an open-source Python library aimed at the interpretation and inspection of ML models, and SHapley Additive exPlanations (SHAP), an approach derived from game theory that explains the output of arbitrary ML models (a short SHAP sketch follows this list).
  • MLOps for Responsible AI: Responsible development and deployment of AI models are critical concerns. MLOps practices that integrate fairness, accountability, transparency, and ethics (FATE) principles throughout the ML lifecycle are essential to ensure that models are unbiased, avoid unintended consequences, and comply with regulations; Microsoft's FATE research group, for example, studies precisely this subject area.

    An example of such a regulation is the AI Act, a “legal framework on AI, which addresses the risks of AI and positions Europe to play a leading role globally,” recently adopted by the European Parliament. When attempting to design responsible and safe AI, for example, the services of Arthur and Fiddler can be consulted.
  • Integration with DevOps: Aligning MLOps practices with existing DevOps workflows can create a more cohesive development environment. This fosters collaboration between data scientists, ML engineers, and software engineers, leading to a more streamlined and efficient software development lifecycle (SDLC) incorporating machine learning.
  • Importance of Data-centric AI & DataOps: Data is the lifeblood of ML models. DataOps practices that ensure data quality, availability, and security throughout the ML lifecycle are crucial for model performance and overall system reliability. DataOps combines automation, collaboration, and agile practices to improve the speed, reliability, and quality of data flowing through your organization. This approach lets you get insights from your data faster, make data-driven decisions more effectively, and improve the quality and performance of your Machine Learning models based on this data.
  • Focus on Security: As ML models become more ubiquitous, securing them from potential attacks becomes increasingly important. MLOps practices that integrate security considerations throughout the model lifecycle are essential to mitigate risks such as data poisoning, adversarial attacks, and model theft.
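
To give a feeling for the XAI tooling mentioned above, here is a minimal SHAP sketch on a public scikit-learn dataset; the model and data are chosen purely for illustration:

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Train a model whose individual predictions we want to explain.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # SHAP attributes each prediction to the individual input features.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:10])

    # shap_values holds per-feature contributions for the first ten samples;
    # they can be inspected directly or visualized, e.g. with shap.summary_plot.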

Conquering Challenges: Building a Robust MLOps Foundation

While the future of MLOps holds immense promise, several challenges must be addressed to ensure successful ML deployments. Here are some key areas to consider:

  • Standardization and Interoperability:  The lack of standardization across MLOps tools and frameworks can create silos and hinder collaboration. Promoting interoperability between tools and establishing best practices for MLOps workflows is crucial for creating a more unified and efficient ecosystem. A pioneering approach to this problem is the Open Inference Protocol, an industry-wide effort that aims to establish a standardized communication protocol between so-called inference servers (e.g., Seldon MLServer, NVIDIA Triton Inference Server) and orchestrating frameworks such as Seldon Core or KServe.
  • Talent Shortage:  The demand for skilled MLOps professionals outstrips the available supply. According to Statista, the number of vacancies for IT specialists in companies in Germany rose to a record high of 149,000 in 2023, and Index Research reports that employers advertised almost 44,000 jobs for AI experts from January to April 2023. Organizations must invest significantly in training programs, talent acquisition strategies, and competitive employee compensation to narrow this gap and establish a strong MLOps team. This includes recognizing the essential skills needed for successful MLOps implementation, such as data science, software engineering, cloud computing, and DevOps, and forming interdisciplinary teams that span these diverse domains.
  • Monitoring and Observability:  Effectively monitoring the performance and health of ML models in production is critical for catching issues early and ensuring model reliability. Developing robust monitoring frameworks and integrating them into MLOps pipelines is essential; Aporia, an ML platform that focuses on the observability of ML models, can be leveraged to achieve these objectives (a minimal drift-check sketch follows this list).
  • Model Governance:  Establishing clear governance frameworks for managing the lifecycle of ML models is crucial. This includes defining roles and responsibilities, ensuring model versioning and control, and setting guidelines for model deployment and retirement. The enterprise platforms Domino Data Lab and Dataiku are examples of the solutions and platforms that holistically reflect these features and many others of the ML lifecycle.
  • Explainability and Bias Detection:  As mentioned earlier, ensuring model explainability and detecting potential biases are critical aspects of responsible AI. Organizations must invest in tools and techniques to understand how models arrive at their decisions and identify and mitigate any fairness issues.
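
Monitoring platforms aside, the core idea behind many drift checks is simple enough to sketch. The following self-contained example (referenced from the monitoring point above) computes a population stability index (PSI) between a training-time and a live feature distribution; the ~0.2 threshold in the comment is a common heuristic, not a standard:

    import numpy as np

    def population_stability_index(expected, actual, bins=10):
        """Simple drift score between a reference and a live distribution."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
        e_pct = np.clip(e_pct, 1e-6, None)  # guard against log(0) in empty buckets
        a_pct = np.clip(a_pct, 1e-6, None)
        return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

    # Compare a feature as seen at training time vs. in production.
    rng = np.random.default_rng(0)
    training = rng.normal(0.0, 1.0, 5000)
    production = rng.normal(0.5, 1.0, 5000)  # shifted distribution -> drift
    print(population_stability_index(training, production))  # well above ~0.2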

Conclusion: Embrace and shape MLOps practices to create added value

The future of MLOps is extremely bright. Organizations can build resilient and efficient operational processes by identifying and evaluating emerging trends, drawing the right conclusions from them, and proactively addressing the associated challenges. In doing so, they provide AI models for their customers and create a solid foundation that ensures the models’ scalability, availability, and reliability and fulfills legal requirements. The most important added value of this process, however, is the trust it creates in the reliability, fairness, and security of the AI models, which in turn strengthens trust in the company.

Florian Erhard

Florian is a Machine Learning Engineer at HiQ. He loves traveling and immersing himself in foreign cultures when he isn’t reading about AI & Machine Learning.

HiQ offers a wide range of Atlassian services and is a successful Atlassian Platinum Solution Partner.

More Power for Atlassian Products

Atlassian Intelligence brings AI-powered automation directly into your Atlassian tools. From smart ticket classification to automated responses – the new features increase efficiency and productivity. We explain how you get the most out of Atlassian Intelligence for your team and yourself.

Artificial intelligence (AI) is one of THE current buzzwords and influences almost every area of our everyday lives. Driven by the digitalization of recent years, AI has developed into a very dynamic branch of IT.

This rapid progress seems to be just gaining momentum and is already influencing us in so many areas: from the search for the fastest route to the next show recommendation on a streaming platform to smart home devices. AI algorithms are making things easier for us everywhere. This also means that more and more companies and start-ups are specializing in the technology or are at least developing further in this area.

This also applies to Atlassian, the Australian provider of software solutions for development teams, with its Atlassian Intelligence (also abbreviated to AI).

What is Atlassian Intelligence?

This collection of AI-powered functions is now offered for many Atlassian products and boosts the performance of popular cloud products such as Jira, Confluence and Trello. In a sense, Atlassian Intelligence acts as a virtual teammate, helping teams increase their productivity and work together more efficiently. Built on OpenAI technology, it enhances Atlassian’s tools and offers users a seamless working environment.

Some benefits of Atlassian Intelligence

  • adaptation to Atlassian solutions
  • integration into Atlassian programs
  • based on Natural Language Processing (NLP)
  • across all areas – from search to navigation to content creation and task execution

Why is it worth using Atlassian Intelligence?

  • save time and resources
  • ease of work
  • easier creation of strategies
  • automatically optimized reports
  • improved communication channels
  • facilitated content creation
  • smart search function

Below we would like to show you a few examples of how AI can already be used. More features and functions will follow in the near future and AI will also be activated in other tools, such as Trello, Bitbucket or Atlassian Analytics.

Example: Confluence

Features:

  • create and transform content
    • improve writing style
    • suggest titles
    • improve grammar
    • shorten text
    • brainstorming
  • summarize content
  • create automations
  • define terms

Example: Jira Service Management

Features:

  • create and transform content
  • summarize content
  • suggest request types
  • answers to customer inquiries

Example: Jira Software

Features:

  • create and transform content
  • search for issues

Starting May 6th, Atlassian Intelligence will be automatically activated for all Premium and Enterprise plan products.

Don’t have Premium yet? Then feel free to contact us.

As an Atlassian Platinum Solution Partner, we can offer you, among other things, discounted access!

Would you like to know more? Then get in touch!