OK, let's suppose M is 4 and N is 10, so a user could have two periods of continuous access of 4 days each within the 10-day period. What would the output look like then? Or is it that M*2 > N is always true?
@tdth  Yes, implementing CIS benchmarks to harden your Red Hat 9 servers can potentially impact your Splunk deployment if not carefully managed. What specific hardening measures are you planning to apply? It's best to first implement CIS hardening in a UAT environment and thoroughly test its impact before deploying it in production.  
The expected output is as follows. For the first window, from January 1, 2025 to January 6, 2025, output the user's earliest access time and latest access end time, the username, the department, and the number of times the account has been accessed. The second output, for the window from January 2, 2025 to January 7, 2025, contains the same fields: the earliest access time and latest access end time, the username, department, and the number of times the account has been accessed. The following results follow this pattern...
So, what would your expected output look like in this instance?
Hello @KKuser Have you tried adding the events to an Investigation? - https://docs.splunk.com/Documentation/ES/8.0.2/User/StartInvestigation You can add multiple notable events to an Investigation and write down notes as well. Also, if you want to create a notable event from any raw event, you can simply click Edit Event and Create Notable Event right from the search. Please let me know if you have any questions on the same.
Hi @SplunkUser001 , where did you install the add-on? It must be installed on the Forwarder and on the Search Head. Ciao. Giuseppe
Assume M is 4 (M represents the number of times the user accesses the account; the same account may be accessed multiple times per day) and N is 6 days (the window length: assuming the data starts from the 1st, a result is output on the 6th day, another on the 7th day, and so on). If the user accesses the same account for 5 consecutive days, it is counted as 5 times. The N-day window slides forward one day at a time (days 1-6, then days 2-7) until the end of the data.
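As an illustration of that sliding window, here is a sketch only: the index name and the user and department field names are assumptions, streamstats time_window needs the events sorted by _time, and the threshold of 4 stands in for M.

index=access_logs
| sort 0 _time
``` rolling 6-day (N) window per user and department; one row per event ```
| streamstats time_window=6d count AS access_count min(_time) AS earliest_access max(_time) AS latest_access BY user department
| where access_count >= 4
| convert ctime(earliest_access) ctime(latest_access)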
Hi @Praz_123    You may be able to create a simple app to push out to your instances which runs a modular input to capture this, but in terms of out-of-the-box functionality, unfortunately this isn't available at the moment (see the sketch below for a lightweight alternative). Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
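A minimal sketch in the meantime, assuming *nix hosts that forward their _internal logs; the component name is taken from typical splunkd.log startup output, so treat it as an assumption:

index=_internal sourcetype=splunkd component=ulimit
``` splunkd logs its effective ulimit values at startup; keep the newest line per host ```
| stats latest(_raw) AS latest_ulimit_message BY host

Since the Monitoring Console can search across all connected instances, something like this can approximate a per-server ulimit view without deploying anything new.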
Hi @jkamdar , cases like yours are usually caused by a misconfiguration; for this reason I suggest analyzing your btool output more carefully, not just grepping it. The issue could be caused by two inputs or by a transformation in props.conf. Ciao. Giuseppe
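For example (a sketch; <your_sourcetype> is a placeholder for the real sourcetype), the merged configuration can be reviewed directly with btool:

$SPLUNK_HOME/bin/splunk btool inputs list --debug
$SPLUNK_HOME/bin/splunk btool props list <your_sourcetype> --debug

The --debug flag shows which configuration file each setting comes from, which makes duplicate inputs or unexpected transforms easier to spot.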
Hi, I am getting this error. Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct. I have used telnet as well, and it connects successfully.
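One sketch that may help narrow this down, assuming the forwarder sends its own _internal logs (the component name below is the usual one for forwarding output, so treat it as an assumption):

index=_internal sourcetype=splunkd component=TcpOutputProc (log_level=WARN OR log_level=ERROR)
``` surface the forwarder's own output errors, e.g. connection or SSL failures ```
| stats count latest(_raw) AS latest_message BY host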
Hi @rahulkumar , the results using INDEXED_EXTRACTIONS = json or spath should be the same. The advantage of the first one is that it's automatic and you don't need to use the spath command in every search. The problem without transforming message into _raw is that the standard add-ons usually don't run with this data structure, because it's different from the one they are expecting. Ciao. Giuseppe
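A minimal search-time sketch (index and sourcetype names are placeholders), assuming the JSON payload sits in a field called message:

index=my_index sourcetype=my_json
``` parse the JSON in the message field into search-time fields ```
| spath input=message

With INDEXED_EXTRACTIONS = json set in props.conf on the forwarder (which applies when the whole event is JSON), equivalent fields become available without the spath step.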
How can I find the ulimit value/status for all servers in the Monitoring Console?
Hi @SN1 , don't attach a new request to an old one, even if it's on the same topic, because you'll probably never receive an answer. Open a new case describing your issue. Ciao. Giuseppe
Hi @bosseres , Please describe how you solved the topic and close the case for the other people of the Community. Giuseppe P.S.: Karma Points are appreciated
Hi @max-ipinfo  Were you able to find anything in $SPLUNK_HOME/var/log/splunkd.log relating to this file and the 500 error? You could also try running   $SPLUNK_HOME/bin/splunk cmd python3 /opt/splunk/etc/apps/ipinfo_app/bin/debug_endpoint.py   to check that the Python file has no syntax errors - you might not get any output if it works, but you may well get an error if there is an issue. It's also worth checking the ownership and permissions on this file on the filesystem. If you still have no success, feel free to share the Python file contents and we can continue to debug with you. Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi, could you please tell me how you resolved this issue, as I am having the same issue as well. Thank you.
@myitlab42000  If you registered the account using your business email, you may encounter issues during registration, as some companies block external emails. If this happens, check your spam folder and consider opening a support ticket for assistance.
@richard8  Your cron expression 30 9 1-7 * 0 is not quite right because it triggers on any date in the range 1-7 that is also a Sunday (0), meaning it runs on the Sunday that falls within the first seven days of the month. So if the 1st of the month is not a Sunday, it will still run on the first Sunday within that range rather than on the 1st itself.
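For reference, the five fields of that expression break down as:

30     minute
9      hour
1-7    day of month
*      month (any)
0      day of week (Sunday)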
You need to be more specific about your requirements. Based on the sample you provided, what is the input and expected output of a query for sender? What is the input and expected output of a query for recipients? Are you combining given values of sender, recipients, and sender's IP address in one query and expecting some specific output? Or are you expecting to give an input of a sender (email), and find out all recipients the sender has sent to and the IP addresses this sender has used? How does "X and S have the same values for a given single message in the logs and will change from message to message" affect the outcome? Is this information even relevant to your question? (It didn't help that your sample data contains one X value and one S value.)

There are a million different ways to interpret "query to pull sender (env_from value), recipient(s) (env_rcpt values) and IP address"; this, combined with dozens of ways to implement each interpretation, makes it impossible for volunteers to help you.

If you mean to say that a unique X, S combination marks one unique e-mail transaction, and you want to base your search on X and S values, all you need is from, ip, and rcpt. Something like this:

| stats values(from) as sender values(ip) as ip values(rcpt) as recipients by s x

Your sample data should give

s           x             sender              ip           recipients
44pnhtdtkf  44pnhtdtkf-1  sender@company.com  10.10.10.10  recipient.one@company.com
                                                           recipient.two@DifferentCompany.net

Is this what you are looking for? Here is an emulation of your sample. Play with it and compare with real data.

| makeresults
| eval data = split("Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.436109+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mail cmd=env_from value=sender@company.com size= smtputf8= qid=44pnhtdtkf-1 tls= routes= notroutes=tls_fallback host=host123.company.com ip=10.10.10.10
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.438453+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mail cmd=env_rcpt r=1 value=recipient.two@DifferentCompany.net orcpt=recipient.two@DifferentCompany.NET verified= routes= notroutes=RightFax,default_inbound,journal
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.440714+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mail cmd=env_rcpt r=2 value=recipient.one@company.com orcpt=recipient.one@company.com verified= routes=default_inbound notroutes=RightFax,journal
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446326+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data from=sender@company.com suborg=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446383+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data rcpt=recipient.two@DifferentCompany.net suborg=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446405+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data rcpt=recipient.one@company.com suborg=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446639+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data rcpt_routes= rcpt_notroutes=RightFax,journal data_routes= data_notroutes=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.450566+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=headers hfrom=sender@company.com routes= notroutes=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.455141+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mimelint cmd=getlint lint=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.455182+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mimelint cmd=getlint mime=1 score=0 threshold=100 duration=0.000
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.455201+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mimelint cmd=getlint warn=0", "
")
| mvexpand data
| rename data as _raw
| extract ``` data emulation above ```