

Showing posts with the label generativeAI

Gen AI: Tip #2 (Control How ChatGPT Responds with This Simple Prompting Trick)

Tired of AI giving too much or too little info? Here’s something that can help. Use this prompt at the start of your conversation:

“If I add * at the end of my question, please provide a concise, to-the-point response. If I add **, provide a full and comprehensive response. If I do not provide any symbols, please provide a standard response.”

Now, guide ChatGPT’s response style like this:

🔹 Add * → Short and crisp
🔹 Add ** → Deep and detailed
🔹 No symbol → Balanced by default

✅ Why it works:
- You stay in control of the depth
- No need to rewrite your prompt every time
- It works across any use case — writing, planning, learning, and ideating

Small tweak. Huge flexibility. Try it and see the difference. 🚀
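The same trick can be wired into an API-driven workflow by translating the trailing symbol into a system-style instruction before sending the question to the model. A minimal sketch, assuming a plain Python helper (the function name and instruction wording are my own, not from the post):

```python
def style_for(question: str) -> tuple[str, str]:
    """Map a trailing * / ** marker to a response-style instruction.

    Returns (cleaned question, style instruction) so the instruction can be
    sent as a system message while the cleaned question goes in as the user
    message.
    """
    if question.endswith("**"):
        return question[:-2].rstrip(), "Provide a full and comprehensive response."
    if question.endswith("*"):
        return question[:-1].rstrip(), "Provide a concise, to-the-point response."
    return question, "Provide a standard response."

q, style = style_for("Explain the test pyramid **")
# q is the question without the marker; style asks for a comprehensive answer
```

The split keeps the marker out of the question text, so the model sees only the clean question plus one explicit style instruction.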

GenAI Tip #1: Improve Prompt Results with This Simple Instruction

When using GenAI tools like ChatGPT for test case generation, reviewing requirements, or analyzing user stories, we often need to provide context in chunks.

📌 Start your conversation with this prompt:

“I will be sending you several pieces of information in multiple messages. For each one, your only job is to acknowledge that you’ve received it with a simple message like ‘Acknowledged’ — nothing more. Please do not take any action or provide any analysis or output until I send a final message with the instruction: ‘Now proceed.’ Only then should you act on the information shared.”

🛑 Why this works:
-> It stops the model from responding after every input
-> Ensures the model waits until you’ve shared all details
-> Prevents premature or incomplete answers
-> Mimics a real approach: gather context first, then act with precision

It helps the model listen first, then act — just like a good teammate would.

💡 Whether you’re feeding in test data, requirement docs, or bug logs — ...
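The same gather-then-act flow can be mimicked in code when driving a model through an API: buffer the chunks yourself and only assemble the final prompt once you signal completion. A small sketch under that assumption (the class and method names are illustrative, not from the post):

```python
class ContextBuffer:
    """Collect context chunks; act only when told to proceed."""

    def __init__(self):
        self.chunks = []

    def send(self, message: str) -> str:
        # The sentinel message releases the buffered context as one prompt.
        if message.strip().lower() == "now proceed.":
            return "\n\n".join(self.chunks)
        # Otherwise just store the chunk and acknowledge it.
        self.chunks.append(message)
        return "Acknowledged"
```

Each `send` of a chunk returns only "Acknowledged"; the final "Now proceed." returns the combined context, ready to be sent to the model in a single request.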

Stop Overengineering: Why Test IDs Beat AI-Powered Locator Intelligence for UI Automation

We have all read the blogs. We have all seen the charts showing how Generative AI can "revolutionize" test automation by magically resolving locators, self-healing broken selectors, and interpreting UI changes on the fly. There are many articles that paint a compelling picture of a future where tests maintain themselves.

Cool story. But let’s take a step back. Why are we bending over backward to make tests smart enough to deal with ever-changing DOMs when there's a simpler, far more sustainable answer staring us in the face?

Just use Test IDs.

That’s it. That’s the post. But since blogs are supposed to be more than one sentence, let’s unpack this a bit.

1. Test IDs Never Lie (or Change)

Good automation is about reliability and stability. Test IDs — like data-testid="submit-button" — are predictable. They don’t break when a developer changes the CSS class, updates the layout, or renames an element. You know...
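In practice, a test-ID locator is just a stable attribute selector. A minimal sketch of the idea (the helper function is my own; frameworks like Playwright and Testing Library ship equivalent get-by-test-id helpers):

```python
def by_test_id(test_id: str) -> str:
    """Build a CSS selector that targets an element by its data-testid.

    The selector depends only on the attribute the team controls, so it
    survives CSS class renames, layout changes, and tag swaps.
    """
    return f'[data-testid="{test_id}"]'

selector = by_test_id("submit-button")
# selector == '[data-testid="submit-button"]'
```

The same string works anywhere CSS selectors are accepted, so the locator strategy stays identical across tools.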

Custom Instructions in ChatGPT

Custom Instructions in ChatGPT, also known as Tailored Preferences, are the easiest way to control ChatGPT’s output.

Example 1: Tired of typing “Give a brief response” or “Please respond in length” every time, like me? Try the following instruction:

If I add * at the end of my question, please provide a concise, to-the-point response. If I add **, provide a full and comprehensive response. If I do not provide any symbols, please provide a standard response.

Example 2: We can control the model’s temperature through the API, but there is no way to do it through the UI. You can use the following instruction to have GPT mimic a temperature setting:

If I specify a temperature between 0 and 1 at the end of the question, please respond accordingly: temperature 0 indicates highly deterministic responses, so you should always run a web search before responding; temperature 1 allows for greater creativity and freedom in the response.
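When you do have API access, temperature is a real request parameter rather than a prompt convention. A hedged sketch of parsing a trailing 0-to-1 temperature marker off a question before building the request (the parsing convention, function name, and default are my own):

```python
import re

def split_temperature(question: str, default: float = 0.7):
    """Pull a trailing 0-1 temperature off a question.

    "Name a fruit 0.2" -> ("Name a fruit", 0.2); with no trailing number,
    the default temperature is used.
    """
    match = re.search(r"\s(0(?:\.\d+)?|1(?:\.0+)?)\s*$", question)
    if match:
        return question[:match.start()].rstrip(), float(match.group(1))
    return question, default
```

The returned temperature could then be passed straight through as the `temperature` parameter of a chat-completion request, while the cleaned question becomes the user message.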

Glossary of Key Generative AI Terms

1. Generative AI
A subset of artificial intelligence focused on creating new content, such as text, images, audio, video, or code. It relies on models trained on vast datasets to identify patterns and generate similar but unique outputs.

2. Large Language Model (LLM)
An AI model trained to process and generate human-like text. Examples include GPT (Generative Pre-trained Transformer), BERT, and LaMDA, all of which leverage deep learning architectures, specifically transformers.

3. Transformer
A neural network architecture known for its ability to process sequential data, like text. Transformers use self-attention mechanisms, enabling the model to learn relationships between words and capture long-range dependencies efficiently.

4. Retrieval-Augmented Generation (RAG)
A hybrid approach combining retrieval-based and generative AI techniques. RAG models retrieve relevant information from external sources (e.g., databases or documents) and then use this information to generate contex...
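The RAG definition above boils down to "retrieve relevant text, then generate with that text in the prompt". A toy sketch of the retrieval half, using naive keyword overlap in place of a real vector store (all names and the scoring scheme are illustrative):

```python
def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; return the top k.

    Real RAG systems use embedding similarity, but the shape of the step
    is the same: score every document against the query, keep the best.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

docs = [
    "Transformers use self-attention to model long-range dependencies.",
    "RAG retrieves relevant documents before generating an answer.",
]
context = retrieve("how does RAG retrieve documents", docs)
# The retrieved text would then be prepended to the prompt sent to the LLM.
```

Swapping the overlap score for cosine similarity over embeddings turns this toy into the standard RAG retrieval step without changing its structure.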

OpenAI Learning:- Chapter 6

Using Generative AI for Audio/Video Processing: Power of Summarization

What Is the Purpose of This Application?

This application is for audio and video summarization. It is a useful tool for users who wish to quickly create bullet-point summaries of audio/video content.

Sample code:

from langchain.document_loaders import youtube
from langchain.text_splitter import RecursiveCharacterTextSplitter
import openai
import streamlit as st

openai.api_key = "<<Add your key here>>"

st.set_page_config(page_title="YouTube Audio/Video Summariser App")
st.markdown(
    """<p style="color: #3fd100;font-size: 30px;font-family: sans-serif;text-align:center;margin-bottom:0px;"><b>YouTube Audio/Video </b><span style="color: #3fd100;font-size: 30px;font-family: sans-serif;"><b>Summariser App</b></span></p><p></p>""",
    unsafe_allow_html=True,
)
st.head...

OpenAI Learning:- Chapter 5

Generative AI-Powered Audio/Video Processing: Whisper's Python Adventure

What Is the Purpose of This Application?

An application that demonstrates the ability of Generative AI to process and analyze audio/video files, and then output the lyrics for that audio or video file.

Sample code:

import streamlit as st
from pytube import YouTube
import os
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

def get_mp3(url):
    yt = YouTube(str(url))
    audio = yt.streams.filter(only_audio=True).first()
    destination = '.'
    out_file = audio.download(output_path=destination)
    base, ext = os.path.splitext(out_file)
    new_file = base + '.mp3'
    os.rename(out_file, new_file)
    return new_file

def get_transcript(audio_file):
    device = "cuda:0" if torch.cuda.is_available() else "cpu"  # use the GPU if available, otherwise the CPU
...