All Posts

Hey, I am setting up a Splunk dev environment. I have one indexer, one search head, and one forwarder. I have uninstalled and reinstalled the dev indexer, and I am trying to set it up to use two different filesystems for hot and cold data. The error I'm receiving when I restart Splunk is:

Problem parsing indexes.conf: Cannot load IndexConfig: Cannot create index '_audit': path of homePath must be absolute ('$SPLUNK_HOME/data/audit/db')
Validating databases (splunkd validatedb) failed with code '1'.
If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

I'm not sure how to set this up correctly. I reinstalled the indexer so I could fix the mounts and storage.

In /export/opt/splunk/etc/system/local/indexes.conf, I have something like:

[default]
homePath = $SPLUNK_DB/hot/$_index_name/db
coldPath = $SPLUNK_DB/cold/$_index_name/colddb

For SPLUNK_DB, I have tried to set it in splunk-launch.conf, as shown below:

# Version 9.2.0.1

# Modify the following line to suit the location of your Splunk install.
# If unset, Splunk will use the parent of the directory containing the splunk
# CLI executable.
#
# SPLUNK_HOME=/export/opt/splunk/

# By default, Splunk stores its indexes under SPLUNK_HOME in the
# var/lib/splunk subdirectory. This can be overridden here:
#
# SPLUNK_DB=$SPLUNK_HOME/data/

# Splunkd daemon name
SPLUNK_SERVER_NAME=Splunkd

# If SPLUNK_OS_USER is set, then Splunk service will only start
# if the 'splunk [re]start [splunkd]' command is invoked by a user who
# is, or can effectively become via setuid(2), $SPLUNK_OS_USER.
# (This setting can be specified as username or as UID.)
#
# SPLUNK_OS_USER

PYTHONHTTPSVERIFY=0
PYTHONUTF8=1
ENABLE_CPUSHARES=true
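For reference, the quoted error shows the homePath ending up as the literal, unexpanded string '$SPLUNK_HOME/data/audit/db', which Splunk then rejects as a non-absolute path; that suggests $SPLUNK_HOME is not being expanded when SPLUNK_DB is read. A minimal sketch of one way around this, assuming /export/opt/splunk really is the install root and that /data/hot and /data/cold are hypothetical mount points for the two filesystems, is to give SPLUNK_DB a literal absolute path, or to bypass it entirely with absolute homePath/coldPath values:

# splunk-launch.conf -- uncommented, with a literal absolute path instead of $SPLUNK_HOME
SPLUNK_DB=/export/opt/splunk/data

# indexes.conf [default] stanza, pointing hot and cold at the two mounts directly
# (the /data/hot and /data/cold paths are hypothetical examples)
[default]
homePath = /data/hot/splunk/$_index_name/db
coldPath = /data/cold/splunk/$_index_name/colddb

Either approach satisfies the "must be absolute" check; which one fits depends on where the two filesystems are actually mounted.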
If you look under Lookups, it should show that those are all set and defined. So double check Lookup table files / Lookup definitions / Automatic lookups, and check the Sysmon app context. Also check if there's another lookup with that name; sometimes I have seen another with the same name.

# this should point to most of the Sysmon TA code (transforms) or show another
/opt/splunk/bin/splunk cmd btool transforms list eventcode --debug
| inputlookup SSE-default-data-inventory-products.csv
| outputlookup data_inventory_products_lookup

Credit to https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-Security-Essential-not-loading-correctly/m-p/467564
I am having the exact same issue.  
I'm trying to set the Description field of a ServiceNow Incident ticket through Splunk, and the string I'm passing contains a newline (\n). But when Splunk creates/updates the ticket, either through the snowincident command or an alert action, it automatically escapes the backslash character. So after Splunk passes the info to ServiceNow, the underlying JSON of the ticket looks like this:

{"description":"this is a \\n new line"}

and my Description field looks like this:

this is a \n new line

Is this something that Splunk is doing, or the ServiceNow Add-on? Does anyone know of a way to get around this?
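One workaround that may be worth testing (a sketch, not confirmed against the ServiceNow Add-on): replace the two-character sequence \n with a real newline character before the field reaches the alert action, so there is no backslash left to escape. urldecode("%0A") is one way to produce a literal newline in eval:

| eval description=replace(description, "\\\\n", urldecode("%0A"))

The quadruple backslash is the regex escaping needed to match a literal \n in the string; whether ServiceNow then renders the newline depends on how the add-on serializes the field.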
Thanks for your speedy response and for helping me out @gcusello. Unfortunately, the average does not seem to be returned for this; any idea why? I'm essentially trying to get a Status Indicator panel for this stat, as shown below.
Hi @jthomasc,
at first, put all the search terms in the main search to have more performant searches.
Then you have to use the timechart command to get the daily counts, and stats to average them, something like this:

index=abc granttype=mobile message="*Token Success*"
| timechart span=1d count AS daily_count
| stats avg(daily_count) AS avg

Ciao.
Giuseppe
Sorry for the confusion. I have two sets of time ranges. One comes from the time selector and is used to return results in the range I'm interested in. The other is hard-coded in the query. I want to force Splunk to search index A's events at most in the range from 6 months before 06/01/24 up to 06/01/24 (during this time, logs went to index A only), and index B at most in the range from 06/01/24 to now. I want Splunk to automatically find the intersection of these hard-coded ranges and the range from the time selector.
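One way to sketch this, assuming the indexes are literally named A and B: inline earliest/latest modifiers would override the time picker rather than intersect with it, so let the picker bound the outer search and apply the hard-coded cutoff with where, which effectively intersects the two ranges:

(index=A OR index=B)
| eval cutoff=strptime("06/01/2024", "%m/%d/%Y")
| where (index="A" AND _time>=relative_time(cutoff, "-6mon") AND _time<=cutoff)
    OR (index="B" AND _time>=cutoff)
| fields - cutoff

Events outside the picker's range are never retrieved, and the where clause then trims each index to its own hard-coded window, so only the overlap survives.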
Current query; this shows how many successful login attempts there have been:

index=abc granttype=mobile
| fields subjectid, message
| search message="*Token Success*"
| stats count

I am now looking to create a panel to show the daily average number of successful login attempts across 7 days. Is anyone able to help me with a query please?
Thanks for your reply! It's a dashboard, and we may need to run a query to check something as well. I agree with what you said; checking empty buckets wouldn't take too much time. I was assuming the previous bucket was still getting some logs, and that ignoring logs after the transition date could be faster and save me from removing duplicates, while in my case I believe it should be empty.
Hi @Silah,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated
I am running this. How can I append the IPs from the query below to test.csv?

index=<abc> [| inputlookup ip_tracking.csv | rename FDS AS MID | format ]
| lookup test.csv test_IP AS IP OUTPUT test_IP
| eval match=if('IP'==test_IP, "yes", "no")
| search match=no
| stats count by IP
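If the goal is to add the unmatched IPs as new rows in the lookup, one sketch (assuming the CSV's column is named test_IP, as the lookup above suggests) is outputlookup with append=true, which keeps the existing contents of test.csv and adds the new rows:

| stats count by IP
| rename IP AS test_IP
| fields test_IP
| outputlookup append=true test.csv

Note that append=true does not deduplicate; if the same IP can show up across runs, dedup against an inputlookup of test.csv first.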
This worked perfectly. Thanks @gcusello, I really appreciate your help.
Hi @Silah,
I saw your second message only after my answer; please try this.
Let me understand: what's the value of status in Begin and End events? You have to check these conditions in the evals:

index=your_index status IN ("Begin", "End")
| stats earliest(eval(if(status="Begin",_time,null()))) AS Begin_time latest(eval(if(status="End",_time,null()))) AS End_time BY UUID
| eval diff=End_time-Begin_time
| table UUID diff

Ciao.
Giuseppe
Hi @Silah,
yes, it was a mistake!

index=your_index status IN ("Begin", "End")
| stats earliest(eval(status="Begin")) AS Begin_time latest(eval(status="End")) AS End_time BY UUID
| eval diff=End_time-Begin_time
| table UUID diff

Anyway, you have to separately check the two conditions (status="Begin" and status="End") to verify that those events have the status and UUID fields. You can also add the Begin_time and End_time fields to the final table command to see whether they are present or not. Remember to always use quotes in the eval commands.
Ciao.
Giuseppe
Hello! I have a dashboard with several visualization panels. One of these is linked to a search that pulls the Top 10 Source IPs by Log Activity:

index="index_name" $token.source.address$
| fields source_address
| stats count by source_address
| table source_address, count
| rename source_address as "Source IP", count as "Count"
| sort -Count
| head 10

The token, $token.source.address$, is set by a text box on the dashboard for the bar visualization below. However, in addition to the correct value being shown, there are often other, incorrect values shown as well. There doesn't seem to be a pattern as to why this happens. Does anyone know why this may happen and how to correct it? Thanks!
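One hedged guess, based only on the search as written: the bare token is used as a raw search term, so it matches events where the typed value appears anywhere in the raw event text (another field, a destination address, a substring), not just in source_address. Constraining the token to the field may be worth trying, assuming source_address is extracted at search time:

index="index_name" source_address="$token.source.address$"
| stats count by source_address
| rename source_address as "Source IP", count as "Count"
| sort -Count
| head 10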
Sorry, I should have added that I tried listing the begin_time and end_time in the table also, and both values are simply "True" and not a timestamp.
Hi @gcusello
Thank you, this gets me started. I assume that

| eval diff=End_time-Start_time

should actually be

| eval diff=End_time-Begin_time

as it is called Begin_time in the earliest eval of the Begin event in the stats part.
It does sort of work: my search query is identifying 4000 events and the table lists out 2000 by their UUID, so it has accurately identified that there is a Begin and End pair for each UUID. However, the "diff" field of the table is blank for all of them. When I check the field, the value of diff is "null".
Hi everyone, I was wondering if anyone had any suggestions on effective ways of pulling application data from Splunk Cloud into the PowerBI platform without using the Splunk ODBC driver? Our Business Intelligence team is keen on enriching their data by integrating Splunk with PowerBI. We're aiming to ensure that this integration follows best practices and is both efficient and reliable.

Has anyone here successfully implemented this kind of integration? If so, could you share the approach you took, the tools or connectors you used, and any tips or challenges you encountered?

Thanks in advance for your help!
Patrick
#powerbi #odbc #splunk #businessintelligence
Hi community, can anyone help me figure out which Get operations return incorrect data after an Update (both Get and Update log the request and response)? In my case, the data can be updated multiple times, and I need to guarantee that every Get returns the correct data. For example, take these 5 log rows: 1. Update A = 5; 2. Get A = 5; 3. Get A = 6; 4. Update A = 6; 5. Get A = 6. These logs are sorted by time. Obviously the result obtained in the third row is incorrect; it should return A = 5. The sample data looks like:

id          value   time        operation
124945912   FALSE   1718280482  get
124945938   FALSE   1718280373  get
124945938   FALSE   1718280373  update
124945938   null    1718280363  get
124945937   FALSE   1718280348  get
124945937   FALSE   1718280348  update
124945937   null    1718280337  get
124945936   FALSE   1718280330  get
124945936   FALSE   1718280330  update

Both id=124945937 and id=124945936 are correct, since the value obtained after the Update operation is the same as the updated value (FALSE), even though the value obtained before the Update (null) does not equal the updated value. A Get operation can be ignored if there is no Update operation before it. Can anyone help? Thanks in advance ^^
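A possible approach, assuming index and sourcetype placeholders and that id, value, time, and operation are extracted fields: sort each id's events into time order, carry the most recent update's value forward with streamstats, and flag any get whose value differs from it.

index=your_index sourcetype=your_sourcetype
| sort 0 id time
| eval update_value=if(operation="update", value, null())
| streamstats last(update_value) AS expected BY id
| where operation="get" AND isnotnull(expected) AND value!=expected
| table id time value expected

One caveat: the sample shows a get and an update sharing the same timestamp, so ties in time may need a secondary sort key (a sequence number, if the logs have one) to guarantee the update is processed first.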