submission-meraki #14

Open · wants to merge 5 commits into base: main
36 changes: 36 additions & 0 deletions meraki/Datasets/case-123.txt
@@ -0,0 +1,36 @@

Case Title: The Enigmatic Inheritance

Case Number: 45654

Parties:
Plaintiff - Olivia Thompson
Defendant - Robert Harrington

Facts:
Olivia Thompson, a diligent legal practitioner, filed a lawsuit against Robert Harrington, alleging that she was wrongfully excluded from inheriting a substantial estate left by her late grandfather, Victor Thompson. The estate, valued at over $10 million, included real estate, investments, and valuable art collections.

Background:
Victor Thompson, a wealthy businessman, passed away under mysterious circumstances. Olivia Thompson, his granddaughter and only direct descendant, was surprised to learn that she had been omitted from her grandfather's will entirely. Instead, the entire estate was bequeathed to Robert Harrington, a distant cousin with whom Victor had limited contact.

Allegations:
Olivia Thompson claimed that the will was executed under suspicious circumstances, arguing that her grandfather was coerced or unduly influenced to disinherit her. She contended that Robert Harrington, aware of the estate's value, manipulated Victor into altering the will in his favor.

Legal Arguments:
Olivia's legal team argued that the sudden exclusion of the only direct heir raised significant questions about the testamentary capacity of Victor Thompson at the time of drafting the will. They presented evidence suggesting that Victor was in poor health and susceptible to external pressures during the period leading up to the execution of the contested will.

Furthermore, Olivia's legal team sought to demonstrate a pattern of manipulation and undue influence by highlighting Robert Harrington's financial troubles at the time and his potential motive to secure the inheritance.

In response, Robert Harrington's defense team maintained that Victor Thompson was of sound mind when he executed the will, and the decision to exclude Olivia was deliberate and well-considered. They argued that there was no evidence of coercion or undue influence and that Olivia's absence from the will was Victor's independent choice.

Discovery:
During the discovery phase, both parties presented financial records, medical reports, and witness statements. Olivia's team unveiled a series of letters and emails suggesting a strained relationship between Robert and Victor, indicating potential motives for manipulation.

The defense team countered with testimonials from individuals close to Victor, asserting that he had expressed dissatisfaction with Olivia's lifestyle choices and considered Robert a more responsible heir.

Outcome:
As the trial unfolded, the court grappled with the complexities of testamentary capacity and the burden of proof regarding undue influence. The case attracted media attention due to the high-stakes nature of the estate and the mysterious circumstances surrounding Victor's death.

Ultimately, the court decided in favor of Olivia Thompson, ruling that the evidence presented by her legal team cast sufficient doubt on the validity of the contested will. The court ordered a reassessment of the estate distribution, taking into account Olivia's rightful share.

The case highlighted the importance of safeguarding the integrity of wills and demonstrated the necessity of thorough legal scrutiny in matters of inheritance, particularly when familial relationships are strained and significant assets are at stake.
69 changes: 69 additions & 0 deletions meraki/README.md
@@ -0,0 +1,69 @@
#### Team Name - Meraki
#### Problem Statement - Generative AI Large Language Models Fine Tuned For Legal Practice Platform
#### Team Leader Email - [email protected]

## A Brief of the Prototype:
![diagram](https://github.com/prajnac20/oneAPI-GenAI-Hackathon-2023/assets/47785645/4ca86a1a-ad1b-4369-9141-e38bdb84d226)

Our chatbot is a tool designed to empower legal practitioners by providing in-depth analysis and comprehensive insights into legal cases. Tailored specifically for the legal profession, it offers a seamless and efficient way for legal professionals to navigate complex cases, aiding them in making informed decisions and formulating effective strategies.

### Key Features:

#### Case Summarization:
The chatbot summarizes extensive case materials, extracting key details and presenting a concise overview. This feature enables legal practitioners to quickly grasp the essential elements of a case, saving valuable time in the research process.
#### Legal Research Assistance:
The chatbot integrates advanced legal research capabilities, allowing practitioners to access relevant statutes, case law, and legal precedents. By leveraging natural language processing, it simplifies the research process, making it more accessible for legal professionals.
#### Analysis of Legal Documents:
Legal documents can be overwhelming, but the chatbot excels in analyzing and interpreting legal language. It identifies critical clauses, potential risks, and noteworthy arguments within documents, enabling practitioners to delve into the nuances of each case effortlessly.
#### Customized Case Strategy Recommendations:
Based on the information provided and the intricacies of a case, the chatbot generates personalized recommendations for legal strategies. It takes into account relevant legal precedents, current regulations, and the unique aspects of each case, assisting practitioners in developing robust and tailored legal approaches.
#### Interactive Q&A Sessions:
The chatbot engages in interactive question-and-answer sessions, allowing practitioners to seek clarification on specific legal points, precedents, or procedural matters. This interactive feature promotes a dynamic exchange of information and enhances the depth of understanding.

#### Collaborative Case Management:
The chatbot seamlessly integrates with case management systems, facilitating collaboration among legal teams. It streamlines communication, document sharing, and task allocation, fostering an efficient and organized approach to case handling.

### Tech Stack:

Python frameworks - LangChain and Haystack.
In the current version, LangChain performs candidate document retrieval over the document set using a FAISS vector store; this retrieval step is planned to be replaced with the Intel oneAPI AI Analytics Toolkit.
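
The retrieval step roughly follows the pattern below — a minimal sketch that mirrors the calls in `my_pipeline.py` (the dataset path and embedding model name are taken from that file):

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS

# Load and chunk the sample case file committed under Datasets/
documents = TextLoader("Datasets/case-123.txt").load()
chunks = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)

# Embed the chunks and build an in-memory FAISS index
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
db = FAISS.from_documents(chunks, embeddings)

# Retrieve the chunks most similar to the user's question
candidates = db.similarity_search("Tell me about the Olivia Thompson case")
print(candidates[0].page_content)
```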

### Step-by-Step Code Execution Instructions:
Step 1: Open `my_pipeline.py`.
Step 2: In the `PromptNode`, provide the path to the LLM or supply an API key (see the sketch below).
Step 3: Run `streamlit run bot.py`.
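
A minimal sketch of the `PromptNode` configuration from Step 2; the model identifier and API key below are placeholders, so substitute a local model path or your own key:

```python
from haystack.nodes import PromptNode

# Either point at a hosted model and pass an API key,
# or give the path to a locally downloaded model.
prompt_node = PromptNode(
    model_name_or_path="mistralai/Mistral-7B-v0.1",  # or a local model path
    api_key="YOUR_API_KEY",                          # omit for local models
    max_length=350,
)
```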

### Future Scope:

The future scope of the bot is broad and depends on specific goals, the domain it operates in, and the evolving needs of users. Here are some potential directions for expanding the capabilities and enhancing the functionality of the bot:

#### Natural Language Processing (NLP) Enhancements:
Invest in improving the bot's NLP capabilities to enhance its understanding of complex legal language, user queries, and context. This could involve integrating more advanced NLP models or fine-tuning existing ones.

#### Case Prediction and Analytics:
Integrate machine learning algorithms to enable the bot to predict case outcomes based on historical data and legal precedents. Provide analytics and insights into potential legal strategies based on the analysis of similar cases.

#### Dynamic Legal Research:
Expand the bot's legal research capabilities by integrating with legal databases, academic journals, and real-time legislative updates. Enable the bot to provide up-to-date information on changes in laws and regulations.

#### Multilingual Support:
Extend the bot's language capabilities to support multiple languages, making it accessible to a broader audience. This could involve language translation features and cross-cultural legal insights.

#### Voice and Chat Interface:
Implement voice recognition and interaction capabilities, allowing users to communicate with the bot through voice commands. Enhance the chat interface to support more interactive and dynamic conversations.

#### Integration with Legal Systems:
Integrate the bot with legal case management systems, document repositories, and other tools commonly used by legal practitioners. This ensures a seamless workflow and improves collaboration within legal teams.

#### Legal Compliance Checker:
Develop a feature that checks legal documents for compliance with specific laws and regulations. Provide suggestions for modifications to ensure compliance and reduce legal risks.

#### User Authentication and Authorization:
Implement secure user authentication and authorization mechanisms, especially if dealing with sensitive legal information. Ensure that the bot complies with data protection and privacy regulations.

#### Continuous Learning and Feedback Mechanism:
Incorporate a feedback loop to allow users to provide input on the bot's responses. Use this feedback to continuously improve the bot's accuracy and relevance.

#### Expand to Other Legal Domains:
Consider expanding the bot's capabilities to cover a broader range of legal domains (e.g., family law, intellectual property, corporate law) to cater to a diverse user base.

45 changes: 45 additions & 0 deletions meraki/bot.py
@@ -0,0 +1,45 @@
import time

import streamlit as st

from my_pipeline import MyHaystackPipeline  # Haystack/LangChain RAG pipeline

st.title("Legal Assistant Bot")

# Initialize chat history
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat messages from history on app rerun
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Accept user input
if prompt := st.chat_input("Tell me about a particular case"):
    # Add user message to chat history and display it
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Display assistant response in a chat message container
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
        # Run the retrieval + generation pipeline on the user's question
        assistant_response = MyHaystackPipeline(query=prompt)

        # Simulate a streamed response with a small delay per word
        for chunk in assistant_response.split():
            full_response += chunk + " "
            time.sleep(0.05)
            # Add a blinking cursor to simulate typing
            message_placeholder.markdown(full_response + "▌")
        message_placeholder.markdown(full_response)

    # Add assistant response to chat history
    st.session_state.messages.append({"role": "assistant", "content": full_response})
Binary file added meraki/diagram.png
62 changes: 62 additions & 0 deletions meraki/my_pipeline.py
@@ -0,0 +1,62 @@

import os

from haystack.nodes import PromptNode, PromptTemplate, AnswerParser
from haystack.pipelines import Pipeline
from haystack.schema import Document

# Disable AVX2 so the prebuilt FAISS wheel loads on machines without AVX2 support
os.environ['FAISS_NO_AVX2'] = '1'

from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS


def MyHaystackPipeline(query):
    print(query)

    # Prompt template for the generator: summarize the retrieved text and answer the question
    rag_prompt = PromptTemplate(
        prompt="""Provide a clear and concise response that summarizes the key points and information presented in the text.
Use an unbiased and journalistic tone. Do not repeat text; keep the answer within 70 words.
\n\n Related text: {join(documents)} \n\n Question: {query} \n\n Answer:""",
        output_parser=AnswerParser(),
    )

    # Load the sample case file and split it into chunks
    loader = TextLoader("Datasets/case-123.txt")
    documents = loader.load()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)

    # Embed the chunks, build a FAISS index, and retrieve the chunks most similar to the query
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    db = FAISS.from_documents(docs, embeddings)
    docs = db.similarity_search(query)
    print(docs[0].page_content)

    # Generator node: point at a hosted model (with an API key) or a locally downloaded model
    prompt_node = PromptNode(
        model_name_or_path="mistralai/Mistral-7B-v0.1 OR PATH TO THE MODEL",
        api_key='YOUR_API_KEY',
        default_prompt_template=rag_prompt,
        max_length=350,
    )

    # Minimal Haystack pipeline: feed the best-matching chunk straight into the prompt node
    pipe = Pipeline()
    pipe.add_node(component=prompt_node, name="prompt_node", inputs=["Query"])
    ans = pipe.run(query=query, documents=[Document(docs[0].page_content)])
    print(ans["answers"][0].answer)
    return ans["answers"][0].answer

# print(MyHaystackPipeline("Tell me about Olivia Thompson case?"))

1 change: 1 addition & 0 deletions similarity_search.py
@@ -0,0 +1 @@