All Posts

Hi @SN1, don't attach a new request to an old one, even if it's on the same topic, because you'll probably never receive an answer. Open a new case describing your issue. Ciao. Giuseppe
Hi @bosseres, please describe how you solved the topic and close the case for the other people of the Community. Giuseppe P.S.: Karma Points are appreciated
Hi @max-ipinfo - were you able to find anything in $SPLUNK_HOME/var/log/splunkd.log relating to this file and the 500 error?

You could also try running

$SPLUNK_HOME/bin/splunk cmd python3 /opt/splunk/etc/apps/ipinfo_app/bin/debug_endpoint.py

to check that the Python file has no syntax errors - you might not get any output if it works, but you may well get an error if there is an issue. It's also worth checking the ownership and permissions on this file on the filesystem.

If you still have no success, feel free to share the Python file contents and we can continue to debug with you.

Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
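If it helps, a rough search for any related startup errors in splunkd.log (a sketch only - it assumes the script name appears in the logged error text, as in the 500 message; adjust the time range as needed):

index=_internal sourcetype=splunkd log_level=ERROR "debug_endpoint.py"
| sort - _time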
Hi, could you please tell me how you resolved this issue, as I am having the same issue as well. Thank you.
@myitlab42000  If you registered the account using your business email, you may encounter issues during registration, as some companies block external emails. If this happens, check your spam folder and consider opening a support ticket for assistance.
@richard8  Your cron expression 30 9 1-7 * 0 is not quite right. In standard cron, when both the day-of-month field (1-7) and the day-of-week field (0, Sunday) are restricted, the job runs when either field matches. So this schedule fires at 09:30 on each of the first seven days of the month and additionally on every Sunday of the month, not only on the first Sunday.
You need to be more specific about your requirements. Based on the sample you provided, what is the input and expected output of a query for sender? What is the input and expected output of a query for recipients? Are you combining given values of sender, recipients, and sender's IP address in one query and expecting some specific output? Or are you expecting to give an input of a sender (email) and find out all recipients the sender has sent to and the IP addresses this sender has used? How does "X and S have the same values for a given single message in the logs and will change from message to message" affect the outcome? Is this information even relevant to your quest? (It didn't help that your sample data contains one X value and one S value.)

There are a million different ways to interpret "query to pull sender (env_from value), recipient(s) (env_rcpt values) and IP address"; this, combined with dozens of ways to implement each interpretation, makes it impossible for volunteers to help you.

If you mean to say that a unique X, S combination marks one unique e-mail transaction, and you want to base your search on X and S values, all you need is from, ip, and rcpt. Something like this:

| stats values(from) as sender values(ip) as ip values(rcpt) as recipients by s x

Your sample data should give

s           x             sender              ip           recipients
44pnhtdtkf  44pnhtdtkf-1  sender@company.com  10.10.10.10  recipient.one@company.com
                                                           recipient.two@DifferentCompany.net

Is this what you are looking for? Here is an emulation of your sample. Play with it and compare with real data.

| makeresults
| eval data = split("Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.436109+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mail cmd=env_from value=sender@company.com size= smtputf8= qid=44pnhtdtkf-1 tls= routes= notroutes=tls_fallback host=host123.company.com ip=10.10.10.10
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.438453+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mail cmd=env_rcpt r=1 value=recipient.two@DifferentCompany.net orcpt=recipient.two@DifferentCompany.NET verified= routes= notroutes=RightFax,default_inbound,journal
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.440714+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mail cmd=env_rcpt r=2 value=recipient.one@company.com orcpt=recipient.one@company.com verified= routes=default_inbound notroutes=RightFax,journal
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446326+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data from=sender@company.com suborg=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446383+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data rcpt=recipient.two@DifferentCompany.net suborg=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446405+00:00 host filter_instance1[1394]: rprt s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data rcpt=recipient.one@company.com suborg=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.446639+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=data rcpt_routes= rcpt_notroutes=RightFax,journal data_routes= data_notroutes=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.450566+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=session cmd=headers hfrom=sender@company.com routes= notroutes=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.455141+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mimelint cmd=getlint lint=
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.455182+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mimelint cmd=getlint mime=1 score=0 threshold=100 duration=0.000
Feb 11 10:04:12 host.company.com 2025-02-11T15:04:12.455201+00:00 host filter_instance1[1394]: info s=44pnhtdtkf m=1 x=44pnhtdtkf-1 mod=mimelint cmd=getlint warn=0", "
")
| mvexpand data
| rename data as _raw
| extract
``` data emulation above ```
I maintain IPinfo's Splunk App: https://splunkbase.splunk.com/app/4070

Our customers have recently reported that our application doesn't work when Splunk Enterprise Security is enabled. For context, our application uses one of two modes to interact with our data: 1) it queries our API directly, or 2) it downloads our datasets locally using a public HTTPS endpoint. The failure only happens in the second mode, when we have to make REST calls to coordinate the download of our data.

One key finding in my early investigation is that our Splunk application communicates using raw, non-SSL-verified HTTPS requests (i.e. using the requests Python library with verify=False), authenticated by session keys. Splunk Enterprise Security seems to prevent these types of communication. To bypass this restriction, I converted everything over to the Splunk Python SDK, which avoids all of these SSL issues. I quickly realized that, to leverage the Splunk Python SDK in all scenarios and with consistency, it would be easier to use bearer tokens throughout, so the second change I made was leveraging bearer tokens for REST communications.

Despite these two changes, the application still doesn't work with Splunk Enterprise Security enabled. It works without a problem when it is disabled (for example, when testing in the Docker Splunk dev environment).

I've also tried to build a simple debug handler based on splunk.rest.BaseRestHandler. When I try to call it directly with Splunk Enterprise Security enabled, I get the following error:

ERROR - HTTP 500 Error starting: Can't load script "/opt/splunk/etc/apps/ipinfo_app/bin/debug_endpoint.py" -- Error starting: Can't load script "/opt/splunk/etc/apps/ipinfo_app/bin/debug_endpoint.py"

I haven't been able to track down this particular error in Splunk forums or elsewhere on the Internet. If anyone has insight on this problem, I would appreciate any help. Thank you!
Hi @MichaelM1, Does your test script fail at ~1000 connections when sending a handshake directly to the intermediate forwarder input port and not your server script port? Completing a handshake and sending no data while holding the connection open should work. The splunktcp input will not reset the connection for at least (by default) 10 minutes (see the inputs.conf splunktcp s2sHeartbeatTimeout setting). It still seems as though there may be a limit at the firewall specific to your splunktcp port, but the firewall would be logging corresponding drops or resets. The connection(s) from the intermediate forwarder to the downstream receiver(s) shouldn't directly impact new connections from forwarders to the intermediate forwarder, although blocked queues may prevent new connections or close existing ones. Have you checked metrics.log on the intermediate forwarder for blocked=true events? A large number of streams moving through a single pipeline on an intermediate forwarder will likely require increasing queue sizes or adding pipelines.
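A quick way to check for blocked-queue events (standard metrics.log fields; replace the host filter with your intermediate forwarder's hostname):

index=_internal source=*metrics.log* group=queue blocked=true host=<your_intermediate_forwarder>
| stats count by name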
@darrfang When I tried, I think it listed the two pairs within the same row of the table, whereas the second mvexpand broke them into their own rows. I guess it depends what you're going to do with the data, but if you wanted to sort or filter you might want them expanded further. Either way, I'm glad it worked out for you - thanks for letting me know.
I am not entirely clear on your requirement. Let's assume for argument's sake that M is 4 and N is 6. If a user accesses the same account for sensitive information for 5 days in a row, does that count as one visit for all 5 days or only for days 4 and 5? And does it only count if at least 4 of the days are in the sliding window of 6 days, or when any of the 5 days are in the 6-day window?
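For what it's worth, here is a minimal sketch of just one of those interpretations - at least 4 distinct active days per user and account within a sliding 6-day window. The base search and the field names user and account are assumptions:

index=sensitive_access
| bin _time span=1d
| stats count by user account _time
| sort 0 - _time
| streamstats time_window=6d dc(_time) as active_days by user account
| where active_days >= 4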
Excellent - mvmap is very powerful when handling MV fields. One small point I noticed on your use of single quotes to wrap field names: using single quotes is essential where field names start with numbers or contain spaces or other special characters (some characters are fine, e.g. underscore); otherwise the search just won't work. Here you have one use of substr(session_time, ...) where you are not using single quotes, and it works because quoting isn't needed for either of the two field names involved - so it's good to be aware of when it's necessary on the right-hand side of an eval. When you know the field name, you can make the call, but a command that is very useful is foreach, and when using templated fields <<FIELD>> it's generally good practice to always wrap that usage.
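For example, a small self-contained sketch (hypothetical field names) that wraps the <<FIELD>> template in single quotes, so it keeps working even if a matched field name contains special characters:

| makeresults
| eval session_time="2025-02-11 12:56:51", session_id="7D5A007C1B294E"
| foreach session_* [ eval <<FIELD>>_len = len('<<FIELD>>') ]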
Hi @livehybrid this works like magic! Thanks a lot for giving me the insights! Just wondering what the reason is for doing mvexpand twice here. I did some quick tests, and it seems that if I remove `| mvexpand data_value` I can still get the same results / format.
Okay, reporting back. Your advice was sound. I managed to drop the mvexpand altogether by substituting in this bit of logic at the end.

| stats values(dhcp_host_name) as Hostname values(Switch) as Switch values(Port) as Port values(ip_add) as IP_Address values(mac_add) as MAC_Address list(session_time) as session_time by Session_ID, Auth
| eval Time=mvmap('session_time', if(substr('session_time', 1, len('Session_ID'))=='Session_ID', substr(session_time, len('Session_ID')+2), null()))
| table Time Hostname IP_Address MAC_Address Switch Port Auth
| sort -Time

Just so you have an example of what my data might look like for one event:

Session_ID   = "7D5A007C1B294E"
session_time = "7D5A007C1B294E,2025-02-11 12:56:51"
               "9DE81CAB15DD46,2025-02-06 15:22:13"

By using mvmap, I iterate through session_time and check each value to find which one equals that event's Session_ID. Then I extract the Time field from that value.
In Securonix's SIEM, we can manually create cases through Spotter by generating an alert and then transferring those results into an actual incident on the board. Is it possible to do something similar in Splunk? Specifically, I have a threat hunting report that I've completed, and I'd like to document it in an incident, similar to how it's done in Securonix. The goal is to extract a query from the search results, create an incident, and generate a case ID to help track the report. Is there a way to accomplish this in Splunk so that it can be added to the incident review board for documentation and tracking purposes?
I believe some developer license requests go to an approver, so this might be the cause of the delay. Is your email account associated with a Splunk subscription/license (Cloud or Enterprise)? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
I would suggest checking your spam folder, but also note that the setup of your Splunk Cloud Trial stack might take a little time. How long has it been since you requested this? Has it been more than a couple of hours? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
It's not possible within Splunk to have a cron schedule for the first Sunday of the month; however, you might be able to run it every day for the first 7 days of the month (`30 9 1-7 * *`) and add the following to your search:

| where strftime(now(),"%a")=="Sun"

This will stop the search from continuing if it isn't Sunday. Does this help? Please let me know how you get on, and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
Hi, I requested a developer license but I haven't received an email. My email address is valid, because I did receive an email for resetting my password. Thank you
Hi, I checked that the fields are correct, but I want to know whether using spath for extraction from JSON and using props and transforms give the same result. I am getting the same message value that was unstructured earlier, which was coming after using the statement below in the conf file:

[securelog_override_raw]
INGEST_EVAL = message := json_extract(_raw, "message")

The value in message is still the same unstructured text with lots of messed-up data, like below. Do I have to separate this myself in this case using search queries?

2025-02-11 20:20:46.192 [com.bootstrapserver.runtim] DEBUG Stability run result : com.cmp.bootstrapserver.runtime.internal.api.RStability@36464gf
2025-02-11 20:20:46 [com.bootstrapserver.runtim] DEBUG Stability run result :com.cmp.bootstrapserver.runtime.interndal.api.RStability@373638cgf

After spath, the same message came from the message field, and now, using the conf file with props and transforms, it's still the same. Will it extract like this only?
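For comparison, the search-time equivalent of that INGEST_EVAL would be something along these lines (index and sourcetype are placeholders):

index=your_index sourcetype=your_securelog_sourcetype
| spath input=_raw path=message output=message

If the JSON message field itself holds a plain unstructured log line, both the index-time json_extract and the search-time spath will return that same string unchanged; pulling values out of the line itself would need further parsing, e.g. with rex.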