All Topics

Greetings Splunkers, I have recently started seeing triggered alerts from a couple of correlation searches where, when I attempt to fix or troubleshoot the specific rule, the query actually fails with errors relating to the query itself (for example: unescaped slashes, lookups that do not exist, etc.). How do those notables even trigger if the query itself fails? And how do I audit changes made to a correlation search, to make sure no changes were made to the rule? Thanks, Regards,
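A hedged starting point for the audit half of this question, assuming access to the saved-searches REST endpoint (the title filter is a placeholder to adapt):

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search title="*Correlation*"
    | table title, eai:acl.app, updated

The updated field shows when each saved-search object last changed, which can at least narrow down whether a rule was edited recently.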
Hi, how can I extract the pattern of raw data in a search, the way the Patterns tab in Splunk does? Thanks
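The Patterns tab is driven by event clustering under the hood, so a rough SPL approximation is the cluster command; a minimal sketch (the similarity threshold t is an assumption to tune):

    your_search_here
    | cluster showcount=true t=0.8
    | table cluster_count, _raw
    | sort - cluster_count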
When I convert the following timestamp to a human-readable format I get "12/31/9999 23:59:59" instead of '01/04/22 06:03:47': "timestamp": 1641294227243. I'm using the strftime(timestamp,"%m/%d/%Y %H:%M:%S") function for the conversion. Could you please help me find the right conversion method? Thanks in advance!
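That value is epoch time in milliseconds, while strftime expects seconds, so the date overflows to the 12/31/9999 cap. A minimal fix is to divide by 1000 first:

    | eval readable=strftime(timestamp/1000, "%m/%d/%y %H:%M:%S")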
I use a lookup to define alert/SLO specifications, and I use the lookups as input filters to my alert searches where I can. The relevant lookup column is sli_dimensions_alert (there are other columns in the lookup), and it can hold multiple comma-separated field names, for example: sli_dimensions_alert="env,service_name,type,class". My goal is to create an alert_name based on that CSV value list. Example raw data: env="PRD" service_name="EXGMGR" type="ERROR" class="TIMEOUT". I want to create a macro, calculated field, or automatic lookup to transform sli_dimensions_alert="env,service_name,type,class" into alert_name="PRD-EXGMGR-ERROR-TIMEOUT". I've tried a variety of combinations with split, mvjoin, and mvmap, but haven't found a way to make it work.
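One hedged sketch of the field-name indirection that split/mvjoin alone can't express, assuming Splunk 8.1+ for the json_object and json_extract eval functions (the field list inside json_object is hardcoded here purely for illustration):

    | eval evt=json_object("env", env, "service_name", service_name, "type", type, "class", class)
    | eval dims=split(sli_dimensions_alert, ",")
    | eval alert_name=mvjoin(mvmap(dims, json_extract(evt, dims)), "-")

The idea is to turn the event into a JSON object so each name in the dims list can be looked up as a path, then mvjoin the extracted values with "-".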
I want to divide different multivalue fields based on IP. Current results (one row, with multivalue date, event, and risk fields):

IP       date                    event                        risk
1.1.1.1  2022-01-01 2022-01-02   apache struts ipv4 fragment  high row

My search:

mysearch | mvexpand date | mvexpand event | mvexpand risk | table ip date event risk

Result (every combination, which is not what I want):

IP       date        event          risk
1.1.1.1  2022-01-01  apache struts  high
1.1.1.1  2022-01-01  apache struts  row
1.1.1.1  2022-01-01  ipv4 fragment  high
1.1.1.1  2022-01-01  ipv4 fragment  row
1.1.1.1  2022-01-02  apache struts  high
1.1.1.1  2022-01-02  apache struts  row
1.1.1.1  2022-01-02  ipv4 fragment  high
1.1.1.1  2022-01-02  ipv4 fragment  row

What I want:

IP       date        event          risk
1.1.1.1  2022-01-01  apache struts  high
1.1.1.1  2022-01-02  ipv4 fragment  row

Please help me...
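A sketch of the usual pairing trick: mvzip the aligned multivalue fields into one field before expanding, so positions stay matched (this assumes the three multivalue fields are position-aligned):

    mysearch
    | eval zipped=mvzip(mvzip(date, event), risk)
    | mvexpand zipped
    | eval zipped=split(zipped, ",")
    | eval date=mvindex(zipped, 0), event=mvindex(zipped, 1), risk=mvindex(zipped, 2)
    | table ip date event risk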
After updating the Splunk Add-On for AWS to 5.2.1 we are no longer receiving Cloudtrail data through a proxy server.  The message from the _internal index is "message="Warning: This message does not have a valid SNS Signature <urlopen error [Errno 110] Connection timed out>".  If I bypass the proxy and allow outbound connections from the Splunk server on port 443 (with the proxy enabled in both the addon and server.conf) it is able to retrieve the data.  We are running Splunk Enterprise 8.2.3.2 on a single instance.
We have a ton of reports on Splunk Enterprise, and I need to find out if any are not finishing due to an error. Some reports are large in size (the output is large). Thank you, and Happy 2022!
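A hedged sketch for spotting reports that fail to finish, via the scheduler logs in _internal (the exact status values are an assumption to verify against your version):

    index=_internal sourcetype=scheduler status!=success
    | stats count latest(_time) as last_seen by savedsearch_name, status
    | convert ctime(last_seen)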
I have created an IP choropleth map that correctly shows colors and numbers, and I then save it to a dashboard. While no data has loaded on the dashboard yet, I can hover the mouse over each country and the tooltip correctly shows the country name + 0 IPs. Once any data has begun to load, the tooltip shows the country + # of IPs for the first country the mouse hovers over, even when I hover over other countries. Is this a bug? Am I doing something wrong? Splunk version: 8.2.3 on Linux. Thanks in advance!
Hello, we've had a problem with the service status part of the Splunk Add-on for Microsoft Office 365 since Monday evening. The TA fails to get data and reports the following message: "splunk_ta_o365.common.portal.O365PortalError: 403: Please use MSGraph to access this resource https://docs.microsoft.com/en-us/graph/api/resources/service-communications-api-overview?view=graph-rest-1.0&preserve-view=true". I've seen a page in the Microsoft docs (https://docs.microsoft.com/en-us/office/office-365-management-api/office-365-service-communications-api-reference) which says that the Office 365 Service Communications API will be retired and is replaced by the MS Graph API. Do you know whether an updated TA will be posted soon? Thanks
I need to make sure that a file is delivered every 10 minutes. It always arrives 5 seconds after the top of the 10-minute mark (6:00:05, 6:10:05 ... 6:50:05, 7:00:05, etc.) between 6am and 3pm on weekdays. This is the closest thing I've been able to come up with:

*/11 6-15 * * 1-5

I can't use */10 because the file arrives 5 seconds after the 10-minute marks, so I used 11 and set the time range to 5 minutes so that the last run of the hour catches the XX:50:05 file. The problem is that this solution always misses the first file, which arrives at the top of the hour (XX:00:05), since it runs every 11 minutes. For whatever reason, at the beginning of each hour it runs immediately but then misses the first file, because the file arrives 5 seconds later. Can anyone think of a better solution, or do I just have to create a second alert for those top-of-the-hour files? I can't seem to find a way to delay the search by a few seconds. And how can I mute the erroneous triggers from the first alert?
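One hedged alternative to a second alert: keep a clean */10 cron and have each run evaluate the previous, already-complete 10-minute window, so the XX:X0:05 arrival is always inside the range being checked (the search name and offsets are placeholders to adapt):

    # cron schedule: every 10 minutes on weekdays between 06:00 and 15:59
    */10 6-15 * * 1-5

    # alert search over the previous 10-minute window
    your_file_arrival_search earliest=-10m@m latest=@m
    | stats count
    | where count=0

The trade-off is that a missing file is reported up to 10 minutes after it was due, but there is no gap at the top of the hour and no erroneous triggers.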
I have a ton of reports on Splunk Enterprise and would like to sync them with ES to save time recreating them. Which is better, syncing or cloning? I'd like to sync them. Please advise. Thanks, and Happy 2022.
Hi, is anyone syncing detection content (searches) on SIEM Rules (https://www.siemrules.com/) to their Splunk instance? I'm looking at using their API to build an integration (https://docs.siemrules.com/developers/api-intro), but I'm wondering whether anything already exists? There are no apps on Splunkbase.
This code

import splunklib.client as client

host = "127.0.0.1"
port = "8000"
username = "---"
password = "----"

service = client.connect(username=username, password=password, host=host, port=port, scheme="https")
for app in service.apps:
    print(app.name)

produces

SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1125)
Hi all, I am a complete newbie to Splunk. I want to know how to create reports in Splunk that show which log sources are reporting, and their event counts, for a particular time frame or the last 24 hours. Please help me here. Thank you for your support.
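A hedged sketch of a daily log-source report using tstats, assuming you can read all indexes (swap index=* for your own list):

    | tstats count where index=* earliest=-24h by _time span=1d, index, sourcetype

Saved as a report with a 24-hour (or longer) time range, this gives the event count per index and sourcetype per day.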
Hi everyone, I have an error on my Splunk instance with the description below: "The lookup table '*' does not exist or is not available." The lookup name is not mentioned; the only thing I have is the '*'. Can you please help me with ways to troubleshoot this, so I can learn the name of the lookup and figure out where it is used? I have looked in both the _internal and _audit indexes but couldn't find much. Thanks
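A hedged way to trace it, assuming a Splunk 8.x deployment where search-time messages are logged to _internal under the splunk_search_messages sourcetype (the sourcetype name and fields are assumptions to verify):

    index=_internal sourcetype=splunk_search_messages "does not exist or is not available"
    | table _time, sid, message

The sid field then identifies which search (and therefore which dashboard, alert, or user) referenced the missing lookup.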
I have a CSV file placed on a UF, and the CSV data is as follows:

'"Name" "userid" "use location" "userdesignation"'
Raj raj-123 Argentina Consultant

I have written props and transforms as below, but the header is still being ingested.

props.conf:

[Sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
INDEXED_EXTRACTIONS = CSV
category = Structured
description = Comma-separated value format. Set header and other settings in "Delimited Settings"
disabled = false
TRUNCATE = 99999
DATETIME_CONFIG = CURRENT
KV_MODE = none
HEADER_FIELD_LINE_NUMBER = 1
TRANSFORMS-set = setnull

transforms.conf:

[setnull]
REGEX = (^"Name".*$)|(^'"Name".*$)
DEST_KEY = queue
FORMAT = nullQueue

Please let me know what changes have to be made so that the header is not ingested.
Dear Splunk team, I hope everything is well with you. I am writing this post to inform you that I tried to sign up at Splunk [hosam.shafik@lxt.ai] to download Splunk SOAR community edition, but I did not receive any download link or verification email, although I registered 3 days ago. Can you please assist me with this problem?
Hello Splunkers, I need help. I have multiline logs that look like:

01/04/22 03:00:00 MONITOR_RAP: blah blah: blah ; blah ; blah ; blah ; blah ;
01/04/22 07:00:00 MONITOR_RAP: blah blah: blah ; blah ; blah ; blah ; blah ;

I ingest them with the following sourcetype stanza:

[mysourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRUNCATE = 1000
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 17
TIME_FORMAT = %m/%d/%y %H:%M:%S

The Universal Forwarder monitors the directory where the logs land. The monitor stanza:

[monitor://<path>/*.log]
disabled = 0
sourcetype = mysourcetype
index = myindex

The first ingestion succeeded without problems, but when new entries are written to today's logfile, parsing splits the new entries into multiple events. So the first couple of events were parsed as they should be, but when new logs arrived Splunk made multiple events like the following (each codeblock in the original represents one wrongly parsed event):

01/04/22 03:00:00 blah: blah ; blah ; blah ; blah; blah ;

What is wrong? Is it maybe a bug? I don't get it.
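One hedged thing to check, rather than a confirmed fix: when a UF tails a growing file, event-breaking hints on the UF side can affect how chunks reach the indexer. Assuming UF 6.5+, an EVENT_BREAKER stanza in the UF's props.conf would look like:

    [mysourcetype]
    EVENT_BREAKER_ENABLE = true
    # break before each leading MM/DD/YY HH:MM:SS timestamp; the capture group marks the boundary
    EVENT_BREAKER = ([\r\n]+)\d{2}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}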
I'm getting a bit confused about onboarding "csv" files. The files are _mostly_ csv - they have a header with field names and comma-delimited fields, but they also have a kind of footer consisting of a line full of dashes followed by a line with "Total: number" in it. With a "normal" input I'd just set an ordinary props/transforms pair on the HF to route those lines to nullQueue and be done with it. I'm not sure how it works with indexed extractions though, after reading https://docs.splunk.com/Documentation/Splunk/8.2.4/Data/Extractfieldsfromfileswithstructureddata#Caveats_to_extracting_fields_from_structured_data_files - can I simply define transforms for my sourcetype just as with any other sourcetype? And the other question: the props.conf that I generated on my stand-alone instance, which seems to parse the file properly, looks like this:

[mycsv]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
CHARSET = UTF-8
INDEXED_EXTRACTIONS = csv
KV_MODE = none
category = Structured
disabled = false
pulldown_type = true
TIME_FORMAT = %s
TIMESTAMP_FIELDS = Time
HEADER_FIELD_LINE_NUMBER = 1

But in the production environment the file will be read by a UF, and the data then sent to a HF and on to the indexers. Do I put all those settings into props.conf on the UF or on the HF? Or do I split them between the two? I must admit that this whole indexed-extractions thing is tricky and IMHO not described well enough.
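For reference, the transform described for the non-structured case might look like the sketch below (the stanza names and the regex are assumptions matched to the dashes/Total footer); whether it fires at all for INDEXED_EXTRACTIONS data is exactly the caveat in question:

    # transforms.conf
    [mycsv_drop_footer]
    REGEX = ^(?:-{3,}|Total:\s*\d+)
    DEST_KEY = queue
    FORMAT = nullQueue

    # props.conf
    [mycsv]
    TRANSFORMS-dropfooter = mycsv_drop_footer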
Hi, I am trying to count the number of jobs so far and want to show the daily trend using the timechart command. I'm not able to get it; maybe I am messing up the span option. For example: if the total number of jobs executed so far is 100, and 10 more jobs ran today, then tomorrow it should show 110 plus tomorrow's increase. Command:

index=.......... projects="*" job_id="*" | dedup job_id | timechart span=60d count

In the screenshot the total is shown as 1688 events; I need that as a single value, with the daily trend over it.
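A hedged sketch of the cumulative-plus-daily view using accum (assuming one event per job_id after the dedup):

    index=.......... projects="*" job_id="*"
    | dedup job_id
    | timechart span=1d count as daily_jobs
    | accum daily_jobs as total_jobs

daily_jobs is the per-day trend and total_jobs is the running total, so the latest total_jobs value is the single number to display.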