All Topics


I have a really simple task but haven't figured out how to do it. This is a simple lookup table of milestones:

milestone1   milestone2   milestone3   release
2022-01-30   2022-02-28   2022-03-25   1_0
2022-04-20   2022-05-10   2022-05-25   1_1
2022-07-02   2022-07-21   2022-08-14   1_2
2022-09-20   2022-10-14   2022-11-03   1_3
2022-12-21   2023-01-11   2023-01-31   2_0

I need to determine the "release" cycle a given event falls in and perform some calculations relative to the milestones. For illustration, if an event falls between milestone1 of 1_1 and milestone1 of 1_2 (2022-04-20 and 2022-07-02), it belongs to release cycle 1_1. In other words, milestone2, milestone3, etc. can be considered mere attributes that I need to retrieve. (In the real world, some columns are not dates.)

Initially I thought a simple lookup would suffice, but after various trials I have made little progress. If I could devise a macro based on the lookup table that outputs the value of release, I could certainly then look up the rest of the attributes in the table. I even thought of adding a dummy (constant-value) column so I could retrieve the entire table with every event, but even with that I still couldn't find an easy way to match an event with a row.

The best I have come up with so far is to determine the current release by comparing | inputlookup with now(), like this:

| inputlookup release
| where now() > strptime(milestone1, "%F")
| eventstats max(release) as current_release
| where release == current_release

If milestone1 were in epoch time the search could be simpler, but in any case this only gives the release for "now", and I cannot really use it in a macro unless the macro is placed in a subsearch of sorts. (And if the macro is in a subsearch, I cannot pass event time as a parameter, which means I still don't get to match against events.)
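One possible approach (a sketch, not a tested answer; the index name my_index is an assumption, the lookup name release is taken from the search above): append the lookup rows to the event stream as pseudo-events timestamped at milestone1, sort everything by time, and carry the most recent release row forward onto each event with filldown.

index=my_index
| append
    [| inputlookup release
     | eval _time=strptime(milestone1, "%F"), is_lookup=1]
| sort 0 _time
| filldown release milestone1 milestone2 milestone3
| where isnull(is_lookup)

After the filldown, every event carries the release row whose milestone1 most recently precedes it, so the remaining milestone fields are available for per-event calculations; the is_lookup marker exists only so the lookup rows themselves can be dropped at the end.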
I have the following eval-based macro that should return a string. In the end I am expecting the macro to return something like "earliest=08/20/2022:18:39:14 latest=08/20/2022:18:55:14" so that I can use it in a search as follows:

index=main org_name="cards-org" app_name="service-prod" `search_range("2022-08-20 19:15:14.104",2)`
| table _time msg

But I am getting the error below. Please help me understand what is wrong with this and how to achieve it.

"Error in 'SearchParser': The definition of macro 'search_range(2)' is expected to be an eval expression that returns a string."

The eval-based macro definition is as follows:

| makeresults
| eval Date="$daterange$"
| eval minutes=$seconds$
| eval formattedEarlyts = strftime((strptime(Date, "%Y-%m-%d %H:%M:%S.%3N") - (minutes * 60)),"%m/%d/%Y:%H:%M:%S")
| eval formattedLatestts = strftime((strptime(Date, "%Y-%m-%d %H:%M:%S.%3N") + (minutes * 60)),"%m/%d/%Y:%H:%M:%S")
| eval timerange= " earliest="+formattedEarlyts+" "+"latest="+formattedLatestts
| fields - Date minutes formattedEarlyts formattedLatestts
| eval case (1==1,timerange)
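For context, an eval-based macro definition has to be a single eval expression rather than a search pipeline, which is what the parser error above is pointing at. A minimal sketch of what the definition could look like instead, keeping the same $daterange$ and $seconds$ arguments and the same arithmetic (an untested outline, not a verified fix):

"earliest=" . strftime(strptime("$daterange$", "%Y-%m-%d %H:%M:%S.%3N") - ($seconds$ * 60), "%m/%d/%Y:%H:%M:%S") . " latest=" . strftime(strptime("$daterange$", "%Y-%m-%d %H:%M:%S.%3N") + ($seconds$ * 60), "%m/%d/%Y:%H:%M:%S")

The "." operator concatenates strings in eval, so the whole definition evaluates to one string of the form "earliest=... latest=...".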
Hi All, I have created alerts based on the Error keyword. Below is one of my alerts:

index=abc ns=blazegateway-c2 CASE(ERROR) NOT "INTERNAL_SERVER_ERROR"
| rex field=_raw "(?<!LogLevel=)ERROR(?<Error_Message>.*)"
| eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S.%3N")
| cluster showcount=t t=0.4
| table app_name, Error_Message, cluster_count, _time, environment, pod_name, ns
| dedup Error_Message
| rename app_name as APP_NAME, _time as Time, environment as Environment, pod_name as Pod_Name, cluster_count as Count

From the above query I am getting one of the error messages shown below:

message = ERROR: ld.so: object 'libnss_wrapper.so' from LD_PRELOAD cannot be preloaded: ignored.

I want error messages containing LD_PRELOAD to be excluded from the alerts. Can someone guide me on what I should change in my alert?
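One possible way to drop those events (a sketch based on the query above, not a verified change): exclude the LD_PRELOAD term in the base search,

index=abc ns=blazegateway-c2 CASE(ERROR) NOT "INTERNAL_SERVER_ERROR" NOT "LD_PRELOAD"

or, if other events containing that term should still be retrieved, keep the base search as-is and filter only the extracted field after the rex:

| where NOT like(Error_Message, "%LD_PRELOAD%")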
Given a set of values (e.g. A, B, C) in a multi-value field, I want to get all the combinations that can be generated by this set, i.e. A-B, A-C, B-C. This is like using itertools.combinations in Python, but instead of creating a custom Python command, I want to do it natively in Splunk.
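A rough native sketch of pairwise combinations (the field and sample values are made up for illustration): copy the multivalue field, expand both copies, and keep only one ordering of each pair.

| makeresults
| eval vals=split("A,B,C", ",")
| eval left=vals, right=vals
| mvexpand left
| mvexpand right
| where left < right
| eval pair=left."-".right
| stats values(pair) as combinations

The where left < right clause discards self-pairs and duplicate orderings, which is what reduces the cross product down to combinations rather than permutations.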
Hi Folks, I'm looking into blackout alerts that aren't working properly; the alerts fire for jobs that should be blacked out (ERROR: While inserting Into spm_Delta at line 590 / ERROR DESCRIPTION: ORA-0001 deadlock detected while waiting for resources). Can anybody help point me to where I should start in fixing this problem, please? Thanks!
I have a business journey that is giving me fits. I have a business transaction that shows up in Analytics for regular searches, but the originating tier does not show up in the list of tiers to choose from in the Business Journey milestone section. And since the tier is not available as a choice there, the business transaction cannot be selected. I've tried to work around this every way I can think of, but nothing works.

The originating tier for the BT is an API gateway tier. The service I need to gather the metric data from is downstream from the gateway. So when I create a POJO BT to try to grab the data at the service level, it is masked by the upstream BT from the gateway. Even when I disable the rule on the gateway in an attempt to let the downstream service's POJO BT rule detect it, the transactions from the gateway go into the overflow container and still mask it.

Why is the originating tier not showing up in Business Journeys? Sad and depressed.
I am looking to upgrade our previous app, https://splunkbase.splunk.com/app/291/, which has not been updated since 2011. While upgrading to our new Splunk instance I am reviewing our apps and trying to see which ones we want to migrate over. Since this app is so old and might not even make it to 9.0 when we move, I am looking for another app. Currently I see this one: https://splunkbase.splunk.com/app/5482/#/overview, which has been recently updated and looks promising, but it doesn't say it is compatible with 9.0. While comparing, I found this one: https://splunkbase.splunk.com/app/4183/#/overview, which is compatible with 9.0 but isn't Splunk-supported and hasn't been updated since 2020. Is there a better MaxMind-type app, or is there any advice on the ones I found? Maybe the 9.0 update for the second link just hasn't come out yet and it will soon? I'd appreciate some community opinions and/or updates from the creators of those apps. Thank you!
So the scenario is one VM that can ping another VM.

1st VM is Linux RHEL: I installed Splunk Enterprise and made it the deployment server; I also enabled receiving on 9997.
2nd VM is Windows 10 64-bit: I installed the universal forwarder, chose the customize option, and put in the IP of the Linux box for both the receiver and the deployment server.

When I go to Add Data I get this error: "There are currently no forwarders configured as deployment clients to this instance."

I tried making the deployment server also be the deployment client, but according to my reading you can't do that. Is this right? Do I need a third box to act as either my deployment server or deployment client?
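For reference, a sketch of what deploymentclient.conf on the Windows universal forwarder might look like if the installer option didn't take effect (the path and default management port 8089 are assumptions; the IP is a placeholder):

# %SPLUNK_HOME%\etc\system\local\deploymentclient.conf on the forwarder
[deployment-client]

[target-broker:deploymentServer]
targetUri = <linux-box-ip>:8089

Note the deployment client phones home to the deployment server on the management port (8089 by default), not on the receiving port 9997 used for forwarding data, and the forwarder usually needs a restart after this change.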
Hello, I have a dashboard with one panel displaying a statistics table. The table is very wide (~300 numeric columns), which makes manually editing the number precision for each column in the UI prohibitive. I attempted to do this in the source editor, but the number precision I input does not translate to the table in the panel after saving. Here is a sample block of code I am inserting:

<format type="number" field="xxxxx_percent">
  <option name="precision">0.000</option>
  <option name="unit">%</option>
  <option name="unitPosition">after</option>
</format>

In the resulting table the other options are applied but the precision is not (screenshot omitted). Anyone know why my precision option is not persisting but the other options are? Thank you in advance! -MD
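One thing worth checking (an assumption on my part, not a confirmed diagnosis): in Simple XML the precision option takes an integer count of decimal places rather than a pattern string, so a value like 0.000 may simply be ignored. A sketch of that form:

<format type="number" field="xxxxx_percent">
  <option name="precision">3</option>
  <option name="unit">%</option>
  <option name="unitPosition">after</option>
</format>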
We need to run some scripts from the TA for Unix and Linux, but some of them require privileges. Since we aren't running Splunk with sudo, each time a script runs it returns privilege errors. The scripts we are trying to run are vmstat.sh and nfsiostat.sh. We tried configuring these scripts with setuid so they run as root, but it is not working. Is there any way we can run these scripts without running Splunk with sudo?
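One pattern that is sometimes used (a sketch only; the username and install paths below are assumptions for your environment): grant the Splunk service account passwordless sudo for just those two scripts, then invoke them via sudo.

# /etc/sudoers.d/splunk  (edit with visudo; adjust paths to where the TA is installed)
splunk ALL=(root) NOPASSWD: /opt/splunk/etc/apps/Splunk_TA_nix/bin/vmstat.sh, /opt/splunk/etc/apps/Splunk_TA_nix/bin/nfsiostat.sh

As an aside, most Linux kernels ignore the setuid bit on interpreted scripts, which would explain why the suid approach did not take effect.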
Hello fellow Splunkers! I have a series of questions related to comparing data from two different indexes in Splunk. The data, hardware assets, are assigned by groups. However, the assets are located in two different indexes, and I need to determine which assets are in index 1, index 2, and both. Due to the nature of the data, I cannot provide a sample nor specific field names, but the following table shows the correlation between the data located in both indexes:

Index 1   Relation   Index 2
SN        Equals     serial_number
MAC       Equals     ip_mac
Asset     Equals     barcode

Each of the aforementioned should be its own search to match the data in both indexes.

1.) How would you search index 1 to identify which hardware assets are located in index 1 but not in index 2?
2.) How would you search index 2 for assets (assets which are assigned by groups) that are not in index 1?
3.) How would you search both index 1 and index 2 to determine which assets match in both lists?

Thank you in advance. I know this is a tall order, but any possible searches or tips would be much appreciated! -KB
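A generic sketch for the serial-number pairing (the index names index1/index2 are placeholders; the same shape would apply to MAC/ip_mac and Asset/barcode): search both indexes at once, normalize the two field names into one, and count how many indexes each value appears in.

index=index1 OR index=index2
| eval asset=coalesce(SN, serial_number)
| stats values(index) as found_in dc(index) as index_count by asset
| where index_count=1 AND found_in="index1"

Changing the final where to found_in="index2" addresses question 2, and where index_count=2 addresses question 3.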
Where is the schema for DS / code ? thanks
Hello, sorry for the translation. Currently, with the help of the DB Connect app, I am receiving the logs from an Aurora database without any problem. My client is telling me that he needs to upgrade from version 11.6 to 11.15. At the driver level, should I make any adjustments, or can I tell them that they can update their database without any problem? tnx
Does anyone know of a way to get bytes ingested by host and source over a specified time? I know I can use the license_usage.log to get index and sourcetype like this ...

index="_internal" source="/opt/splunk/var/log/splunk/license_usage.log" sourcetype="splunkd" type="Usage"
| stats sum(b) as bytes by idx
| rename idx as index
| sort - bytes

or this ...

index="_internal" source="/opt/splunk/var/log/splunk/license_usage.log" sourcetype="splunkd" type="Usage"
| stats sum(b) as bytes by st
| rename st as sourcetype
| sort - bytes

However, you cannot use it reliably for host and source because it squashes the data to prevent too many events. I know that can be tuned in server.conf with squash_threshold, but that would be an arbitrary value that could potentially need to keep changing, and honestly it's set that way to not overload the system. So, I'm left wondering if anyone knows of a way to get that data without using license_usage.log.
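Two possible alternatives, offered only as sketches: compute raw sizes directly from the events (accurate but expensive over large ranges), or approximate from the metrics.log per_host_thruput / per_source_thruput series, which are themselves sampled to the top series per interval. The direct approach looks roughly like this (index list and time range are placeholders):

index=* earliest=-24h
| eval bytes=len(_raw)
| stats sum(bytes) as bytes by host, source
| sort - bytes

len(_raw) measures the raw event text, so it tracks but does not exactly equal licensed volume; restricting the index list and time range keeps the cost manageable.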
I have spent days working on this; can someone help? How do I populate the previous week's results? Also, there are different license keys for the same errors, which is why some errors show two entries. I have the following search:

index=test sourcetype=dhi:testdata ErrorCode!=0
| `DedupDHI`
| bucket _time span=1w
| lookup table1 LicenseKey OUTPUT CustomerName
| eval CustomerName=coalesce(CustomerName,LicenseKey)
| stats count as Result by CustomerName,ErrorCode,_time
| eventstats sum(Result) as Total by CustomerName
| eval PercentOfTotal = round((Result/Total)*100,3)
| streamstats current=f latest(Result) as Result_Prev by CustomerName,ErrorCode
| eval PercentDifference = round(((Result/Result_Prev)-1)*100,2)
| fillnull value="0"
| append
    [ search index=test sourcetype=dhi:testdata ErrorCode!=0
    | `DedupDHI`
    | lookup table1 LicenseKey OUTPUT CustomerName
    | eval CustomerName=coalesce(CustomerName,LicenseKey)
    | stats count as Result by CustomerName
    | eval ErrorCode="Total", PercentOfTotal=100]
| fillnull value="0"
| lookup table2 ErrorCode OUTPUT Description
| lookup table1 LicenseKey OUTPUT CustomerName
| eval CustomerName=coalesce(CustomerName,LicenseKey)
| eval Error=if(ErrorCode!="Total", ErrorCode+" ("+coalesce(Description,"Description Missing - Update table2")+")", ErrorCode)
| rename Result_Prev as "Previous Week Results", PercentDifference as " Percent Difference", PercentOfTotal as "Percent of Total"
| fields CustomerName, Error, Result,"Previous Week Results", " Percent Difference" , "Percent of Total"
| sort CustomerName, Error, PercentDifference

OUTPUT:

CustomerName | Error | Result | Previous Week Results | Percent Difference | Percent of Total | _time
customer_1 | 1002 (Invalid Address State Code. The two digit state code is invalid) | 4 | 0 | 0 | 3.361 | 2022-08-12T00:00:00.000-0500
customer_1 | 1003 (Invalid Birth Year) | 1 | 0 | 0 | 0.84 | 2022-08-12T00:00:00.000-0500
customer_1 | 1006 (Invalid UnderwritingState) | 1 | 0 | 0 | 0.84 | 2022-08-12T00:00:00.000-0500
customer_1 | 1013 (Invalid Drivers License Format) | 12 | 0 | 0 | 10.084 | 2022-08-12T00:00:00.000-0500
customer_1 | 1013 (Invalid Drivers License Format) | 1 | 12 | -91.67 | 0.84 | 2022-08-19T00:00:00.000-0500
customer_1 | 1023 (Invalid Name) | 3 | 0 | 0 | 2.521 | 2022-08-12T00:00:00.000-0500
customer_1 | 1027 (Invalid UnderwritingState) | 87 | 0 | 0 | 73.109 | 2022-08-12T00:00:00.000-0500
customer_1 | 1027 (Invalid UnderwritingState) | 1 | 87 | -98.85 | 0.84 | 2022-08-19T00:00:00.000-0500
customer_1 | 1305 (Unable to connect to data provider) | 9 | 0 | 0 | 7.563 | 2022-08-12T00:00:00.000-0500
customer_1 | Total | 119 | 0 | 0 | 100 | 1969-12-31T18:00:00.000-0500
customer_2 | 1023 (Invalid Name) | 16 | 0 | 0 | 55.172 | 2022-08-12T00:00:00.000-0500
customer_2 | 1201 (Lookback Date Not Set / Offset = 0) | 1 | 0 | 0 | 3.448 | 2022-08-12T00:00:00.000-0500
customer_2 | 1305 (Unable to connect to data provider) | 11 | 0 | 0 | 37.931 | 2022-08-12T00:00:00.000-0500
customer_2 | 1305 (Unable to connect to data provider) | 1 | 11 | -90.91 | 3.448 | 2022-08-19T00:00:00.000-0500
customer_2 | Total | 29 | 0 | 0 | 100 | 1969-12-31T18:00:00.000-0500
customer_3 | 1023 (Invalid Name) | 3 | 0 | 0 | 20 | 2022-08-12T00:00:00.000-0500
customer_3 | 1027 (Invalid UnderwritingState) | 11 | 0 | 0 | 73.333 | 2022-08-12T00:00:00.000-0500
customer_3 | 9999 (Timeout expired (9999)) | 1 | 0 | 0 | 6.667 | 2022-08-12T00:00:00.000-0500
customer_3 | Total | 15 | 0 | 0 | 100 | 1969-12-31T18:00:00.000-0500
customer_4 | 1003 (Invalid Birth Year) | 1 | 0 | 0 | 3.846 | 2022-08-12T00:00:00.000-0500
customer_4 | 1013 (Invalid Drivers License Format) | 5 | 0 | 0 | 19.231 | 2022-08-12T00:00:00.000-0500
customer_4 | 1013 (Invalid Drivers License Format) | 1 | 5 | -80 | 3.846 | 2022-08-19T00:00:00.000-0500
customer_4 | 1023 (Invalid Name) | 14 | 0 | 0 | 53.846 | 2022-08-12T00:00:00.000-0500
customer_4 | 1026 (Drivers License Number is a required field) | 3 | 0 | 0 | 11.538 | 2022-08-12T00:00:00.000-0500
customer_4 | 9999 (Timeout expired (9999)) | 1 | 0 | 0 | 3.846 | 2022-08-12T00:00:00.000-0500
customer_4 | 9999 (Timeout expired (9999)) | 1 | 1 | 0 | 3.846 | 2022-08-19T00:00:00.000-0500
customer_4 | Total | 26 | 0 | 0 | 100 | 1969-12-31T18:00:00.000-0500
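For what it's worth, a common week-over-week pattern looks like the sketch below (an outline only, reusing the index, sourcetype, lookup and macro names from the search above): restrict the search to the last two whole weeks, sort explicitly before streamstats so the previous week's bucket is the row streamstats actually sees, and only then drop the older week.

index=test sourcetype=dhi:testdata ErrorCode!=0 earliest=-2w@w latest=@w
| `DedupDHI`
| bucket _time span=1w
| lookup table1 LicenseKey OUTPUT CustomerName
| eval CustomerName=coalesce(CustomerName,LicenseKey)
| stats count as Result by CustomerName, ErrorCode, _time
| sort 0 CustomerName ErrorCode _time
| streamstats current=f window=1 last(Result) as Result_Prev by CustomerName, ErrorCode
| where _time >= relative_time(now(), "-1w@w")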
Hi everyone,

I have the following data, as shown in the table below:

State   ID     APP    _time
INFO    ABC    Car    19/08/22 19:51
INFO    ABC    Car    19/08/22 19:52
INFO    DEF    Car    20/08/22 19:53
INFO    ZZZ    Book   30/08/22 19:51
INFO    ZZZ    Book   19/08/22 19:55
WARN    ABC    Car    19/08/22 19:56
WARN    XYZ    Car    20/08/22 19:51
WARN    ZZZ    Book   19/08/22 19:58
WARN    ZZZ    Book   19/08/22 19:59
ERROR   ABC    Car    19/08/22 20:00
ERROR   ABC    Car    19/08/22 20:01
ERROR   XYZA   Car    30/08/22 19:51

I have to create a statistical analysis for the following requirement: find the count of distinct ID by APP for any given State.

For example, for State=INFO my results should be:

APP    Count
Car    2
Book   1

For State=ERROR my results should be:

APP    Count
Car    2

Currently I am trying this:

index=testdata | stats count(eval(searchmatch("*INFO*"))) BY APP

But I am not getting the count of records with distinct ID. My question is: how do I use the stats command with an eval function and a distinct count across two separate columns?
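A possible sketch, assuming State, ID and APP are extracted fields in index=testdata: use dc() with an eval that nulls out rows from other states, so only the wanted state's IDs are counted distinctly.

index=testdata
| stats dc(eval(if(State="INFO", ID, null()))) as Count by APP
| where Count > 0

If only one state is ever needed per search, filtering up front (index=testdata State="INFO" | stats dc(ID) as Count by APP) gives the same result more cheaply; the eval form is mainly useful when several states are counted side by side in one stats.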
We are in Splunk Cloud with ES 7.0.0. As a user with the sc_admin or ess_admin role, when selecting an incident to edit, the drop-down for "Status" gives no matches. All other drop-downs give options as expected. We've tried enabling/disabling all statuses, creating new statuses, adding/removing transition roles for all statuses, granting the edit_reviewstatus capability to additional roles, granting write permissions to the reviewstatuses_lookup KV store collection, and several other things. Is there a key thing we are missing to be able to change status on incidents with the ess_admin user?
I created a Splunk Python script, set it up in Splunk Web under "Data inputs", and followed all the procedures, but my script is not running in Splunk Web. I installed the Splunk Python SDK on Windows using this command: pip install splunk-sdk. I've run my code in this folder and verified that it works (C:\Program Files\Splunk\etc\apps\search\bin\python sample.py), but it doesn't work in Splunk Web. How do I solve this problem on Windows? Do I need to change anything in the Splunk folder path C:\Program Files\Splunk\etc\apps\search\bin\sample.py? Is there any solution to this problem for Splunk on Windows?
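Two things may be worth checking, offered as assumptions rather than a confirmed diagnosis: scripted inputs are executed by Splunk's own bundled Python, which does not see packages installed into the system Python with pip (so splunk-sdk may not be importable inside Splunk even though it works from the command line), and the script needs an inputs stanza roughly like the sketch below (interval and sourcetype are placeholders).

# inputs.conf in the app that contains the script -- a sketch
[script://.\bin\sample.py]
interval = 300
sourcetype = sample_script
disabled = 0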
Hi, we are trying to integrate Gmail logs into our Splunk Cloud instance. We have tried the Splunk Add-on for Google Workspace (https://splunkbase.splunk.com/app/5556/). The integration was smooth, and we were able to see gsuite header logs in Splunk. The problem was that it eventually generated large bills from Google for the BigQuery queries, so we were forced to disable it temporarily. When we did an analysis, we found that the add-on's current approach is to query all partitions at once using the query below:

"SELECT * FROM `{gcp_project_id}.gmail_logs_dataset.daily_*` "
"WHERE event_info.timestamp_usec > {start_time_usec} "
"AND event_info.timestamp_usec < {end_time_usec} "
"ORDER BY event_info.timestamp_usec ASC"

Instead of querying the whole set of partitions, we would like to query only the table for each day, which would massively reduce the cost. I did raise a support ticket with Splunk on this, and they have confirmed it requires a code change and cannot commit to a timeline. We also tried to manually edit this part of the code and upload it via a custom app, but it didn't pass the vetting process. It would be really helpful if someone could suggest an alternate solution for integrating Gmail logs, or a way to upload the modified add-on. Much appreciated, Archa
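For the cost question specifically, one option on the BigQuery side (a sketch of the idea, not a change to the add-on itself) is to keep the wildcard table but add a _TABLE_SUFFIX filter so BigQuery can limit the scan to the daily tables covering the requested window. Roughly:

SELECT *
FROM `{gcp_project_id}.gmail_logs_dataset.daily_*`
WHERE _TABLE_SUFFIX BETWEEN FORMAT_DATE('%Y%m%d', DATE(TIMESTAMP_MICROS({start_time_usec})))
                        AND FORMAT_DATE('%Y%m%d', DATE(TIMESTAMP_MICROS({end_time_usec})))
  AND event_info.timestamp_usec > {start_time_usec}
  AND event_info.timestamp_usec < {end_time_usec}
ORDER BY event_info.timestamp_usec ASC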
I would like to have six intermediate forwarders in front of the indexers, and I want to configure parsing on the intermediate forwarders only. Can someone help me with this configuration? I have done the basic configuration, but I am seeing blocked parsing queues and TailReader errors on the intermediate forwarders, and traffic is getting blocked. Can you please help me solve this problem?
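As a starting point only (hostnames and ports below are placeholders, and this assumes the intermediate forwarders are heavy forwarders, since universal forwarders do not parse): the intermediate tier needs an outputs.conf pointing at all of the indexers so events keep draining; blocked output downstream is a common cause of the parsing-queue backpressure described above.

# outputs.conf on each intermediate (heavy) forwarder -- a sketch
[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true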