All Topics


If you look at the attached picture, I can't see the real-time alert option. Could you please assist me in getting this on my Splunk instance?
I created a HEC token called test_app, initially for accepting log data from a test app. That app has since morphed into a prod app, and I would like to change the HEC token name to prod_app. How do I do that? Thanks.
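On Splunk Enterprise, HEC tokens are defined as inputs.conf stanzas, where the text after "http://" in the stanza name is the token's display name. A sketch of what the rename could look like, assuming the token lives in an app you can edit (app name and index are placeholders; the token *value*, i.e. the GUID, stays the same):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf
# Rename the stanza from [http://test_app] to [http://prod_app];
# keep the existing token GUID so clients don't need reconfiguring.
[http://prod_app]
token = <existing-token-guid>
index = <your_index>
```

After editing, reload the HTTP inputs (or restart Splunk). On Splunk Cloud you would do this through the HEC management UI instead of editing files.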
Hello, I am trying to use a base search between two single-value panels. The first panel covers the last 24 hours and the second must cover the last 7 days, but when I put <earliest>-7d@h</earliest><latest>now</latest> in the second panel I get a validation warning. What should I do?

<row>
  <panel>
    <single>
      <search id="test">
        <query>index=toto sourcetype=tutu | fields signaler | stats dc(signaler)</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </single>
  </panel>
  <panel>
    <single>
      <search base="test">
        <query>| stats dc(signaler)</query>
      </search>
    </single>
  </panel>
</row>
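Post-process (base="...") searches inherit the base search's time range and cannot override it, which is why the validation warning appears. One possible restructuring (a sketch, untested against this dashboard): run the base search over the widest window (7 days) and narrow the 24-hour panel with a _time filter inside the post-process query.

```xml
<!-- base search declared once at dashboard level, spanning the widest window,
     keeping only the fields the post-process searches need -->
<search id="base7d">
  <query>index=toto sourcetype=tutu | fields _time signaler</query>
  <earliest>-7d@h</earliest>
  <latest>now</latest>
</search>

<row>
  <panel>
    <single>
      <search base="base7d">
        <!-- last 24 hours: filter on _time inside the post-process search -->
        <query>| where _time &gt;= relative_time(now(), "-24h@h") | stats dc(signaler)</query>
      </search>
    </single>
  </panel>
  <panel>
    <single>
      <search base="base7d">
        <!-- full 7 days -->
        <query>| stats dc(signaler)</query>
      </search>
    </single>
  </panel>
</row>
```

The trade-off is that the base search now always scans 7 days of data; if the 24-hour panel is the hot path, two independent searches may be cheaper.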
Hello all, I have a query that searches the Windows Security logs and shows results in the following format using a stats function. As you can see, I am grouping connection attempts from multiple users to a particular Dest. Also, "Connection Attempts" is the total for all of the users listed under "User" in each row.

index=xxx source="WinEventLog:Security" EventCode=4624
| stats values(dest_ip), values(src), values(src_ip), values(user), dc(user) as userCount, count as "Connection Attempts" by dest

Dest | Dest_IP | Src | SRC_IP | userCount | User | Connection Attempts
XX | XXXX | XXX | XXX | 3 | User A, User B, User C | 9
XX | XXXX | XXX | XXX | 2 | User D, User E | 78

I would like to show how many connection attempts were made by each user. How can I segregate this data per user?
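One possible approach (a sketch, untested against these events): add user to the by-clause so the count splits per dest/user pair instead of being totalled per dest.

```
index=xxx source="WinEventLog:Security" EventCode=4624
| stats values(dest_ip) as Dest_IP, values(src) as Src, values(src_ip) as SRC_IP,
        count as "Connection Attempts" by dest, user
```

If one row per dest is still wanted, a multivalue column of per-user counts could be built instead, e.g. `| stats count by dest, user | eval user_count=user.": ".count | stats list(user_count) by dest`.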
Hi all, I have an XML file as below.

<?xml version="1.0" encoding="UTF-8"?>
<suite name="abc" timestamp="20.08.2021 15:47:20" hostname="kkt2si" tests="5" failures="1" errors="1" time="0">
  <case name="a" time="626" classname="x">
    <failure message="failed" />
  </case>
  <case name="b" time="427" classname="x" />
  <case name="C" time="616" classname="y" />
  <case name="d" time="626" classname="y">
    <error message="error" />
  </case>
  <case name="e" time="621" classname="x" />
</suite>

The cases which don't have a failure or error are the ones that passed. I am able to make a list of the cases, but I am confused about how to add a column for the status. Does anyone know a solution for this?

|spath output=cases path=suite.case{@name}| table cases

This is how I extracted the cases. I want to add a column which shows the status. Please suggest some answers.
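One possible approach (a sketch, not tested against these exact events): split the raw XML into one result per <case> element, then mark each case as failed or errored when its fragment contains a <failure> or <error> child.

```
| rex max_match=0 field=_raw "(?s)(?<case_xml><case\b[^>]*(?:/>|>.*?</case>))"
| mvexpand case_xml
| rex field=case_xml "name=\"(?<case_name>[^\"]+)\""
| eval status=case(match(case_xml, "<failure"), "FAILED",
                   match(case_xml, "<error"),   "ERROR",
                   true(),                      "PASSED")
| table case_name status
```

The field names case_xml, case_name, and status here are placeholders; the key idea is keeping each case's children together in one fragment so the status can be derived per case rather than from separate multivalue extractions that lose alignment.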
We recently upgraded to Splunk Enterprise 8.2.2, and we just had a license expire in a lower environment without ever seeing an alert. Upon investigation, it looks like the search for "DMC Alert - Expired and Soon To Expire Licenses" may have an issue. In the search below, if I change "| where has_valid_license == 0" to "| where has_valid_license == 1", it displays the expired license in the search results. This search doesn't appear to have been changed, and it is the same in all our Monitoring Console instances. The alert was working last month, before we upgraded from 7.2.x. Has anyone else seen the same thing?

| rest splunk_server_group=dmc_group_license_master /services/licenser/licenses
| join type=outer group_id splunk_server
    [ rest splunk_server_group=dmc_group_license_master /services/licenser/groups
    | where is_active = 1
    | rename title AS group_id
    | fields is_active group_id splunk_server]
| where is_active = 1
| eval days_left = floor((expiration_time - now()) / 86400)
| where NOT (quota = 1048576 OR label == "Splunk Enterprise Reset Warnings" OR label == "Splunk Lite Reset Warnings")
| eventstats max(eval(if(days_left >= 14, 1, 0))) as has_valid_license by splunk_server
| where has_valid_license == 0 AND (status == "EXPIRED" OR days_left < 15)
| eval expiration_status = case(days_left >= 14, days_left." days left", days_left < 14 AND days_left >= 0, "Expires soon: ".days_left." days left", days_left < 0, "Expired")
| eval total_gb=round(quota/1024/1024/1024,3)
| fields splunk_server label license_hash type group_id total_gb expiration_time expiration_status
| convert ctime(expiration_time)
| rename splunk_server AS Instance label AS "Label" license_hash AS "License Hash" type AS Type group_id AS Group total_gb AS Size expiration_time AS "Expires On" expiration_status AS Status
I am looking to create a simple dashboard with fruit on the x-axis and amount on the y-axis, based on the last event. When I try to list the amounts, all of the amounts get listed instead of the one for the corresponding fruit. Any help or documentation is appreciated.

{
  "Results": [
    {
      "Fruit": "Apple",
      "amount": 9
    },
    {
      "Fruit": "Orange",
      "amount": 37
    },
    {
      "Model": "Cherry",
      "Amount": 27
    },
  ]
}
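The mismatched listing usually happens because spath flattens the array into one multivalue field per key, losing the pairing between each fruit and its amount. One possible approach (a sketch; the index/sourcetype are placeholders, and the coalesce lines account for the inconsistent Model/Amount keys in the sample): expand each array element into its own result before extracting.

```
index=your_index sourcetype=your_sourcetype
| head 1
| spath path=Results{} output=result
| mvexpand result
| spath input=result
| eval Fruit=coalesce(Fruit, Model), amount=coalesce(amount, Amount)
| table Fruit amount
```

The `| head 1` keeps only the most recent event, matching the "based on the last event" requirement; the resulting Fruit/amount table can feed a column chart directly.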
I am working on migrating some items over to Dashboard Studio. I have a very simple stats command getting a few counts. One item I have is to just get an average response time, avg(responseTime). When I put this into my search, the column doesn't get results, while other columns like count(eval(status=OK)) populate fine. Also, if I select to run the item as just a search, it works fine and all my data shows. Has anyone else had similar issues?
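One thing worth trying (a sketch, not a confirmed root cause): visualizations can be fussy about column names that contain parentheses, so renaming each aggregation to a plain field name sometimes makes the column populate.

```
| stats count as total,
        count(eval(status="OK")) as ok_count,
        avg(responseTime) as avg_response_time
```

Note that inside count(eval(...)) the string literal should be quoted, i.e. status="OK"; an unquoted OK is interpreted as a field name and can silently match nothing.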
I have been pulling my hair out on this one all day. I have an accelerated data model that has two datasets: hostInfo and networkInfo. They are standalone root searches, and they happen to share some fields, like hostname. When run in a normal Splunk search window, the searches work perfectly fine. Example:

index=summary_host_info search_name="Host_Info" | fields hostname os cpu

However, only the first dataset ever returns results from tstats. I've tested this and swapped the two around. Example of a simple query I've been using to test:

| tstats count("hostInfo.hostname") FROM datamodel="endpoint_info" WHERE nodename="hostInfo"

There are no required fields, permissions seem fine, and the data model summary is 10% built at around 1 GB. I can even recreate the same dataset and use it as the second one, and that second, identical dataset will not return results.

Edit: I finally found a warning after clicking on "Datasets" at the top and clicking into one specifically:

Issue occurred with data model 'test.s3jaytest'. Issue: 'Failed to generate dmid' Reason: 'Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel'. Failed to parse options. Clearing out read-summary arguments.

What does this mean, and how do I fix it? I'm using root searches, not root events.
I have a lookup sample.csv as follows, where one of the Host values is empty:

Name | Host
TEST_USER | abc, def
USER_1 | *
user_3 | ghi

Now I use the lookup in a search, and for the USER_1 Host I want to use a wildcard. Using the asterisk symbol directly in the lookup doesn't work. Is there any way I can add a wildcard for USER_1? A little research in the Splunk docs gives me some inputs, like needing to use props and transforms to do so. No props or transforms exist for that application. Can I create a condition in props/transforms just for this purpose? If so, what should the stanzas be in both configuration files? Any help would be great.
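A sketch of the config-based approach (the stanza name below is a placeholder; the lookup must live in an app you can edit): transforms.conf supports a match_type of WILDCARD on a named field, which makes a * stored in that column of the CSV behave as a wildcard at match time.

```
# transforms.conf
[sample_wildcard_lookup]
filename = sample.csv
match_type = WILDCARD(Host)
```

With only this transforms.conf stanza, the lookup can be invoked explicitly in a search, e.g. `| lookup sample_wildcard_lookup Host OUTPUT Name`; a props.conf stanza (a `LOOKUP-` setting on the relevant sourcetype) is only needed if the lookup should run automatically.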
Hi all, I am using the query below to get forwarder disk utilization, but it's not working:

index=os sourcetype=df host=de1secsplfwd002.dc-r.security.vodafone.com | strcat host '@' Filesystem Host_FileSystem | timechart avg(UsePct) by Host_FileSystem

Basically, our forwarder's disk space is getting filled because of some specific intelligence logs. We want to show the respective team that their logs are causing the sudden surge.
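One likely culprit (worth checking first): in SPL, single quotes denote a field reference, so '@' in strcat refers to a field literally named @, which almost certainly doesn't exist and would leave Host_FileSystem empty. String literals need double quotes:

```
index=os sourcetype=df host=de1secsplfwd002.dc-r.security.vodafone.com
| strcat host "@" Filesystem Host_FileSystem
| timechart avg(UsePct) by Host_FileSystem
```

If it still returns nothing after that change, verifying that the UsePct and Filesystem fields are actually extracted for sourcetype=df would be the next step.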
Hello there. I'm trying to prepare a dashboard that will query indexes for the latest events during a given period (let's say the last 30 minutes) from a list of event sources, and will warn users if the latest events are older than a given threshold (or maybe I'll apply some more sophisticated logic later; I don't know yet). I also want to know if there are no events whatsoever. The problem is, I don't just want to query everything; I have a lookup that defines my event sources to monitor. Depending on the type of the source, I might distinguish the source by an index/host pair or an index/source pair; there may be some other method in the future, but for now that's it. So what is my problem? The problem is that I don't like my solution; it's kind of ugly. I first need to do a subsearch with inputlookup to define a set of conditions for tstats, then I have to transform (and probably aggregate some results, since, for example, for file-based sources I can have multiple results if I do a tstats over an index/source/host trio), and after that I have to do an inputlookup again to create a zero-valued fallback to aggregate with the tstats result. So effectively I have something with the general structure of:

| tstats [ | inputlookup | eval/whatever/prepare conditions]
| stats/transform/whatever
| append [ | inputlookup | eval/whatever/prepare ]
| stats sum and tidy the results
| check_for_zeros, check threshold and so on...

That's the general idea. It should work, but I don't really like the fact that I need to use a subsearched inputlookup twice, and the results of those subsearches will be, I suppose, highly similar to each other. Any idea whether this can be done in a more "tidy" way?
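One small tidy-up worth considering (a sketch; the lookup name monitored_sources and its columns are assumptions, and for brevity this version keys only on index): the trailing zero-valued fallback doesn't need a full append-subsearch, because `inputlookup append=t` can splice the lookup rows into the pipeline directly.

```
| tstats latest(_time) as last_seen
    where [ | inputlookup monitored_sources | fields index ]
    by index
| inputlookup append=t monitored_sources
| fields index last_seen
| fillnull value=0 last_seen
| stats max(last_seen) as last_seen by index
| eval status=case(last_seen=0,               "no events",
                   now() - last_seen > 1800,  "stale",
                   true(),                    "ok")
```

This still reads the lookup twice, but avoids the second subsearch and keeps the fallback inline. Extending it to index/host and index/source pairs would require normalizing the group keys so the zero-valued fallback rows aggregate with the tstats rows, which is the same subtlety described above.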
I have Splunk Enterprise (8.0.x) & ES (6.4.x). The UFs are 7.x.x. It looks like I have to upgrade the UFs to 8.0.x and then to 8.2.2.1 first, correct? Then upgrade the Splunk instances to 8.2.2.1, right? Please share step-by-step upgrade instructions to 8.2.2.1 if you have them. Thank you very much.
Hello Splunk gurus, for a given dashboard which has tables, I create text fields/drop-downs to filter the table data, which of course takes extra space in the UI. I was wondering if Splunk provides a way to create a filter on the table header, like Excel, without creating a separate textbox/drop-down for the filter. Any idea? For example, the table below was created by this query:

index=micro host=app150*usa.com "API Timeline"
| rex field=_raw "FirstCompTime:(?<FirstComp>[^\,]+)"
| rex field=_raw "SecondCompTime:(?<SecondComp>[^\,]+)"
| rex field=_raw "ThirdCompTime:(?<ThirdComp>[^\,]+)"
| table FirstComp, SecondComp, ThirdComp

FirstComp | SecondComp | ThirdComp
78 | 25 | 31
80 | 22 | 34
81 | 26 | 36

Now I want to create a filter on a table header, say on the header named "ThirdComp", like Excel, as shown below. Thanks.
Hello guys! Please help me write this request correctly; otherwise I don't understand how to do it right.

file.csv:
username | ip_address_old | id_old | desti
John | 192.168.11.5 | 1234 | abcd

index=IndexName:
usernem | ip_address_new | id_new | desti
John | 172.168.15.10 | 4321 | bsir

Where id_old != id_new. Output:
usernem | ip_address_new | id_new | desti | id_old
John | 172.168.15.10 | 4321 | bsir | 1234
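One possible shape for the search (a sketch; it assumes file.csv is configured as a lookup, and that the event field really is spelled usernem as shown in the tables):

```
index=IndexName
| lookup file.csv username AS usernem OUTPUT ip_address_old id_old
| where id_old != id_new
| table usernem ip_address_new id_new desti id_old
```

The `username AS usernem` clause maps the lookup's key column onto the differently-spelled event field; the where clause then keeps only the rows whose IDs changed.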
I have an eval on a dashboard that used to work, but it stopped and I haven't been able to figure out why. On the dashboard I'm taking the _time and turning it into a human-readable string using `strftime(_time, "%m/%d/%Y %H:%M:%S %Z")`, and that works great. The problem comes when I try to convert it back later to build a link to a search. For example: ``` <eval token="endTimestamp">relative_time(strptime($row.Timestamp$, "%m/%d/%Y %H:%M:%S %Z"), "+30m")</eval> ``` used to work and return the Unix time with 30 minutes added, but now `strptime` just returns NaN, even though this is the right format. I've checked all the Splunk docs and everything looks right, but it is still broken. Any idea what I could be doing wrong? Here is the snippet from the field row I'm making: ``` <condition field="Search"> <eval token="startTimestamp">$row.Timestamp$</eval> <eval token="endTimestamp">relative_time(strptime($row.Timestamp$, "%m/%d/%Y %H:%M:%S %Z"), "+30m")</eval> <eval token="corKey">$row.Correlation Key$</eval> <link target="_blank">search?q=(index=### OR index=###) earliest=$startTimestamp$ latest=$endTimestamp$ correlationKey=$corKey$</link> </condition> ``` I have taken out everything but $row.Timestamp$, and that returns something like `10/03/2021 07:41:27 PDT`, which is the format I put into it; I just can't do the reverse. I have copied and pasted the format from the `strftime` call and still no luck converting it back so I can do math on it. Any suggestions?
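For what it's worth, strptime's handling of %Z with abbreviated zone names like PDT is known to vary across platforms and Splunk versions, so an upgrade can plausibly break a previously working parse. One common workaround (a sketch, assuming the dashboard and the data share a timezone so dropping the zone name is safe) is to strip the trailing zone before parsing:

```
<eval token="endTimestamp">relative_time(strptime(replace($row.Timestamp$, "\s+\w+$", ""), "%m/%d/%Y %H:%M:%S"), "+30m")</eval>
```

An alternative that avoids parsing entirely is to carry the raw epoch _time through the table as a hidden column and use it directly in the link tokens.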
Hi all, new user here. I was getting started on the tutorial, and using the "start searching" page that came up after adding the data successfully, I'm seeing behaviour I don't understand. The search

index="splunktutorial" source="tutorialdata.zip:*" "categoryid=sports"

returns results, but

index="splunktutorial" source="tutorialdata.zip:*" categoryid="sports"

or

index="splunktutorial" source="tutorialdata.zip:*" categoryid=sports

don't return results. To make it more confusing, I added the condition action=purchase to the search that returned results, and it worked as expected, returning results where the action was "purchase".

https://docs.splunk.com/Documentation/SCS/current/Search/Quotations

The Splunk documentation for quotation marks says all string literals must be in double quotes, but gives no examples where the field has to be included. Both categoryid and action are classified as strings. Any help understanding what is going on would be appreciated.
Do Splunk ITSI and IT Essentials Work require a paid subscription? Are they available for a Splunk Cloud instance? Splunk Cloud version: 8.2.2107.2. Enterprise Security version: 6.6.0.
Hello, in the past I have used the checkbox method to hide panels after opening new ones. In this example, I would like to have a panel disappear after I click on a value in the panel. Currently, my next panel appears on the click, but the existing panel remains. I'm wondering if there is any way to hide the existing panel after I click on what I want to pass as the token. XML preferred.
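A sketch in Simple XML (the searches and token names here are placeholders): give the first panel a depends token, then unset that token in the same drilldown that sets the value token, so one click both passes the value and hides the panel.

```xml
<panel depends="$show_first$">
  <table>
    <search>
      <query>index=main | stats count by host</query>
    </search>
    <drilldown>
      <!-- pass the clicked value and hide this panel in one click -->
      <set token="selected_host">$click.value$</set>
      <unset token="show_first"></unset>
    </drilldown>
  </table>
</panel>
<panel depends="$selected_host$">
  <table>
    <search>
      <query>index=main host=$selected_host$ | stats count by source</query>
    </search>
  </table>
</panel>
```

The show_first token would need to be set when the dashboard loads, e.g. via an <init> block at the top of the dashboard.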
Hello all, we have the Microsoft Log Analytics add-on installed on a Splunk forwarder, with which we are ingesting all of the Azure Log Analytics workspace logs into Splunk. For the past few days, we have observed the following pitfalls:

1. Delay in the Azure log ingestion into Splunk.
2. Duplicate entries of Azure logs.

On investigation, we identified the following connection errors:

10-21-2021 13:44:42.789 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py" raise ConnectionError(err, request=request)
10-21-2021 13:44:42.789 +0000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/TA-ms-loganalytics/bin/log_analytics.py" ConnectionError: ('Connection aborted.', BadStatusLine("''",))

Can anyone help us with the following?

1. What is the cause of this error?
2. How can we resolve this error/issue and get all the Azure logs without delay?