All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi! I'm stuck writing a query that needs an additional check, and I would be glad if you could point me in the right direction or help with advice. We have the following custom logic:
1. When a user performs some action (what it is doesn't matter), we generate an event in index=custom with the fields: evt_id: 1, user_id: 555 (example).
2. The user should confirm that they are performing this "some action" in a third-party app, and that app generates the next event in index=custom: evt_id: 2, user_id: 555 (example), msg: confirmed.
3. If the user did NOT CONFIRM the action from step 1, we need to generate an alert. That means Splunk did not receive evt_id: 2 in index=custom.
The alert logic is as follows: we need to alert when evt_id: 1 occurred more than 5 minutes ago (the time the user has to confirm the action) and there is NO evt_id: 2 with the same user_id by the time the alert runs. I understood that I need to start with a search like: index=custom evt_id=1 earliest=-7m latest=-5m. But I have no idea how to implement the additional condition on evt_id: 2. If we didn't have the user_id field, I could use the stats count command, but I need to correlate both events (1 and 2) on the user_id field. Thanks for your help, have a nice day.
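A minimal sketch of one way to correlate the two events on user_id with stats; the 15-minute lookback and the field names are assumptions taken from the description above, and the time bounds would need tuning to how often the alert runs:

index=custom (evt_id=1 OR evt_id=2) earliest=-15m
| stats max(eval(if(evt_id=1, _time, null()))) as action_time,
        count(eval(evt_id=2)) as confirmations
        by user_id
| where isnotnull(action_time) AND confirmations=0 AND action_time < relative_time(now(), "-5m")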
Splunk PS installed UBA a while back, and I just noticed that we are not getting OS logs from those servers into Splunk Enterprise. Since we have a 10-node cluster, I was trying to find a quicker way to manage them. Is there a reason I shouldn't connect the Splunk Enterprise instances running on all of those nodes to the deployment server?
Hello, community, I wanted to share a challenge that I have mapping fields to data models. The issue is that I have identified/created fields that are required for a data set, but they are not auto-populating, i.e. they cannot be seen by the data model/data set. Any suggestions on where I might be going wrong? Regards, Dan
Is it possible to display textual (string) values instead of numbers on the Y axis? I have a time series with a field called "state", which contains an integer. Each number represents a certain state. Examples: 0="off", 1="on"; or 0="off", 1="degraded", 2="standby", 3="normal", 4="boost". Now I would like a line or bar chart showing the respective words on the Y-axis ticks instead of 0, 1, 2, 3, 4. Note: this was already asked but not answered satisfactorily: https://community.splunk.com/t5/Splunk-Search/Is-it-possible-to-make-y-axis-labels-display-quot-on-quot-and/m-p/222217
Hi everyone! I'm here to share the resolution for one of the frequent errors that we see in the internal logs with sourcetype=splunkd. If you happen to encounter the error below:

"Failed processing http input, token name=token_name, parsing_err="Incorrect index", index=index_name"

please make sure that the index name is added to the respective HEC token. To avoid this error, add the index under the respective token as soon as a new index is created:

[https://token_name]
disabled = 0
index = default_index_name
indexes = index1, index2, index3, [add your index here]

Cheers
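If it helps to see which tokens and indexes are triggering this, something along these lines should surface them; the rex patterns are only a sketch based on the error text quoted above, so adjust them to the exact message format in your environment:

index=_internal sourcetype=splunkd "Failed processing http input" "Incorrect index"
| rex "token name=(?<token_name>[^,]+)"
| rex "index=(?<requested_index>\S+)"
| stats count by token_name, requested_index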
Hi, I have the below data:

_time                              SQL_ID   NEWCPUTIME
2023-10-25T12:02:10.140+01:00      ABCD     155.42
2023-10-25T11:57:10.140+01:00      ABCD     146.76
2023-10-25T11:47:10.156+01:00      ABCD     129.34
2023-10-25T11:42:10.163+01:00      ABCD     118.84
2023-10-25T12:07:10.070+01:00      ABCD     163.27
2023-10-25T11:52:10.150+01:00      ABCD     139.34

The EXPECTED OUTPUT is:

_time                              SQL_ID   NEWCPUTIME   delta
2023-10-25T12:07:10.070+01:00      ABCD     163.27       7.85
2023-10-25T12:02:10.140+01:00      ABCD     155.42       8.66
2023-10-25T11:57:10.140+01:00      ABCD     146.76       7.42
2023-10-25T11:52:10.150+01:00      ABCD     139.34       10
2023-10-25T11:47:10.156+01:00      ABCD     129.34       10.5
2023-10-25T11:42:10.163+01:00      ABCD     118.84       118.84

The Splunk output, which is not correct:

_time                              SQL_ID   NEWCPUTIME   delta
2023-10-25T12:07:10.070+01:00      ABCD     163.27
2023-10-25T12:02:10.140+01:00      ABCD     155.42       7.85
2023-10-25T11:57:10.140+01:00      ABCD     146.76       8.66
2023-10-25T11:52:10.150+01:00      ABCD     139.34       7.42
2023-10-25T11:47:10.156+01:00      ABCD     129.34       10
2023-10-25T11:42:10.163+01:00      ABCD     118.84       10.5

I'm using the below query:

index=data sourcetype=dataset source="/usr2/data/data_STATISTICS.txt" SQL_ID=ABCD
| streamstats current=f window=1 global=f last(NEWCPUTIME) as last_field by SQL_ID
| eval NEW_CPU_VALUE = abs(last_field - NEWCPUTIME)
| table _time, SQL_ID, last_field, NEWCPUTIME, NEW_CPU_VALUE

I tried using the delta command as well, but I'm not getting the expected output with it either.
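One possible fix, assuming the delta should be attributed to the newer sample of each pair: sort the events into ascending time order before streamstats, then sort back afterwards. The isnull() branch reproduces the 118.84 value expected for the oldest row. This is only a sketch built from the query above:

index=data sourcetype=dataset source="/usr2/data/data_STATISTICS.txt" SQL_ID=ABCD
| sort 0 _time
| streamstats current=f window=1 global=f last(NEWCPUTIME) as prev_cpu by SQL_ID
| eval delta = if(isnull(prev_cpu), NEWCPUTIME, abs(NEWCPUTIME - prev_cpu))
| sort 0 - _time
| table _time, SQL_ID, NEWCPUTIME, delta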
How can we measure the number of spool requests in SAP systems using AppDynamics?
Hi, I'd like to know how to associate the "url" tag with the web data model. We're currently working with URL logs in our Splunk ES, but we're encountering difficulties in viewing the data model when conducting searches. Could someone kindly provide guidance on this matter? Thanks  
Hi, I want to merge the "dropped" and "blocked" values of the "IDS_Attacks.action" field in the output of a datamodel search, and include their combined counts under the newly created "blocked" value, so that I can add it to a dashboard. Current output:

IDS_Attacks.action   count
allowed              130016
blocked              595
dropped              1123
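A minimal sketch of one way to do the merge after the datamodel search, assuming the results already contain the IDS_Attacks.action and count columns shown above:

| eval action = if('IDS_Attacks.action'="dropped", "blocked", 'IDS_Attacks.action')
| stats sum(count) as count by action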
Hi, I'm not sure how to get a continuous bar between login and logout. As you can see in the picture, it's marked as login, then a lot of empty space, and then logout. Ideally everything would be colour-marked from login until logout. I thought it could be done through the Format options, but not this time. Hope someone can help me with it. Rgds
I have a response from one of the client applications like this:

{
  "employees": {
    "2023-03-16": {
      "1": {
        "id": 1, "name": "Michael Scott", "email": "demo@desktime.com",
        "groupId": 1, "group": "Accounting", "profileUrl": "url.com",
        "isOnline": false, "arrived": false, "left": false, "late": false,
        "onlineTime": 0, "offlineTime": 0, "desktimeTime": 0, "atWorkTime": 0,
        "afterWorkTime": 0, "beforeWorkTime": 0, "productiveTime": 0,
        "productivity": 0, "efficiency": 0,
        "work_starts": "23:59:59", "work_ends": "00:00:00",
        "notes": { "Skype": "Find.me", "Slack": "MichielS" },
        "activeProject": []
      },
      "2": {
        "id": 2, "name": "Andy Bernard", "email": "demo3@desktime.com",
        "groupId": 106345, "group": "Marketing", "profileUrl": "url.com",
        "isOnline": true, "arrived": "2023-03-16 09:17:00", "left": "2023-03-16 10:58:00", "late": true,
        "onlineTime": 6027, "offlineTime": 0, "desktimeTime": 6027, "atWorkTime": 6060,
        "afterWorkTime": 0, "beforeWorkTime": 0, "productiveTime": 4213,
        "productivity": 69.9, "efficiency": 14.75,
        "work_starts": "09:00:00", "work_ends": "18:00:00",
        "notes": { "Background": "Law and accounting" },
        "activeProject": { "project_id": 67973, "project_title": "Blue Book",
                           "task_id": 42282, "task_title": "Blue Book task", "duration": 6027 }
      }
      .....
    }
  },
  "__request_time": "1678957028"
}

I am facing a problem with the date field "2023-03-16", as this field changes every day. I want to create statistics based on all employee IDs, late employees, email, etc. for the last 7 days. I have used spath, but I cannot use a wildcard search across all late employees on all days. Thanks.
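One pattern that might help is foreach with wildcards over the fields that spath auto-extracts. This is only a sketch: it assumes spath produces field names of the form employees.<date>.<id>.late for this JSON and that <<MATCHSEG2>> captures the employee-id segment, so it may need adjusting:

| spath
| foreach employees.*.*.late
    [ eval late_ids = if('<<FIELD>>'="true", mvappend(late_ids, "<<MATCHSEG2>>"), late_ids) ]
| eval late_count = mvcount(late_ids)
| table _time, late_count, late_ids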
I am getting the error: (502) Insufficient Privileges: You do not have View privilege on Course. I am enrolled in the Splunk Power User training and I cannot access my learning path because of this error.
Can you advise whether there will be any impact on Splunk if we remove the 2022 bucket directories below?

</opt/app/splunk/var/lib/splunk/os/db>ls -lrt
total 644
-rw------- 1 splunk splunk   10 Jan 18 2022 CreationTime
drwx--x--- 2 splunk splunk 4096 Jan 18 2022 GlobalMetaData
drwx--x--- 3 splunk splunk 4096 Jan 18 2022 db_1642559010_1641112260_0
drwx--x--- 3 splunk splunk 4096 Feb 26 2022 db_1645905109_1644968889_4
drwx--x--- 3 splunk splunk 4096 Feb 26 2022 db_1625407961_1565097054_1
drwx--x--- 3 splunk splunk 4096 Feb 26 2022 db_1564424430_1323199008_2
drwx--x--- 3 splunk splunk 4096 Feb 26 2022 db_1645912526_1645346582_5
drwx--x--- 3 splunk splunk 4096 Feb 26 2022 db_1644968878_1642559018_3
drwx--x--- 3 splunk splunk 4096 Feb 26 2022 db_1645931413_1641472459_8
drwx--x--- 3 splunk splunk 4096 Feb 27 2022 db_1646022282_1645905131_11
drwx--x--- 3 splunk splunk 4096 Feb 28 2022 db_1646061049_1646022278_12
drwx--x--- 3 splunk splunk 4096 Mar 31 2022 db_1648760328_1646061038_13
drwx--x--- 3 splunk splunk 4096 May  1 2022 db_1651428760_1648760301_14
drwx--x--- 3 splunk splunk 4096 Jun  1 2022 db_1654064390_1651428766_16
drwx--x--- 3 splunk splunk 4096 Jul  1 2022 db_1656658688_1654064392_17
drwx--x--- 3 splunk splunk 4096 Jul 30 2022 db_1659238089_1656658690_18
drwx--x--- 3 splunk splunk 4096 Aug  6 2022 db_1625407961_1569499319_9
drwx--x--- 3 splunk splunk 4096 Aug  6 2022 db_1625407908_1587017816_6
drwx--x--- 3 splunk splunk 4096 Aug  6 2022 db_1568123891_1361996942_7
drwx--x--- 3 splunk splunk 4096 Aug  6 2022 db_1566397752_1323199008_10
drwx--x--- 3 splunk splunk 4096 Aug  6 2022 db_1659536784_1659238115_19
drwx--x--- 3 splunk splunk 4096 Aug  6 2022 db_1590756532_1590756532_15
drwx--x--- 3 splunk splunk 4096 Sep 12 2022 db_1662507027_1659807171_20
drwx--x--- 3 splunk splunk 4096 Sep 19 2022 db_1663592993_1662507051_21
drwx--x--- 3 splunk splunk 4096 Sep 19 2022 db_1663597969_1663592971_24
drwx--x--- 3 splunk splunk 4096 Sep 19 2022 db_1663600052_1663597937_25
drwx--x--- 3 splunk splunk 4096 Oct 20 2022 db_1666239485_1663600060_26
drwx--x--- 3 splunk splunk 4096 Nov 15 2022 db_1668525038_1666239467_27
drwx--x--- 3 splunk splunk 4096 Nov 15 2022 db_1668525264_1668525013_29
drwx--x--- 3 splunk splunk 4096 Dec 13 2022 db_1660748402_1645073785_31
drwx--x--- 3 splunk splunk 4096 Dec 15 2022 db_1671120985_1668526212_32
Does anyone know where I can find information on installing and configuring the ESET TA and app on Splunk Enterprise for Linux (Debian), with the ESET Administrator on Windows? I don't have any information on installing the newer versions compatible with Splunk Enterprise 9.0.5. Despite having configured the ESET syslog output according to the documentation, I do not see any logs arriving on my search head.
https://help.eset.com/protect_admin/90/en-US/admin_server_settings_syslog.html
https://splunkbase.splunk.com/app/3931/
https://splunkbase.splunk.com/app/3867/#/details
Or
https://splunkbase.splunk.com/app/6808
Each time I run a search query and click visualisation, the default is "column chart". How do I set this to default to "line chart" for myself, and how do I set this for other users? Thanks in advance
CAT to Splunk Logs Failing:
host = 161.209.202.108
user = sv_cat
port = 22
Start time: 10/24/2023 at 4:21am
Hello
As far as I understand, Splunk data models have two main goals:
1) Data models enable users of Pivot to create compelling reports and dashboards without designing the searches that generate them. So the Pivot tool lets you report on a specific data set without using the Splunk Search Processing Language.
2) It's possible to refer to the CIM data models to normalize data that has the same function but different field names. In this case, we need to normalize the data by using tags, aliases, eventtypes, etc. The CIM data models are: Alerts, Application State, Authentication, Certificates, Databases, Data Loss Prevention, Email, Interprocess Messaging, Intrusion Detection, Inventory, Java Virtual Machines, Malware, Network Resolution (DNS), Network Sessions, Network Traffic, Performance, Ticket Management, Updates, Vulnerabilities, Web.
Is this correct? Thanks
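To illustrate the second point: once events are tagged and aliased into a CIM data model, one normalized search works across every source that feeds the model. A minimal sketch against the Authentication data model (the action value and grouping fields are just an example):

| tstats count from datamodel=Authentication where Authentication.action="failure" by Authentication.user, Authentication.src
| sort - count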
This isn't a question, rather just a place to drop a PDF I put together that I titled "Bare Bones Splunk". I've seen a lot of people try to get started with Splunk, but then get stuck right after getting Splunk Enterprise installed on their local machine. It can be daunting to log into Splunk for the first time and know what the heck you should do. A person can get through the install to the What Happens Next page and be pretty overwhelmed with what to do next: Learn SPL and search? What should they search? How should they start getting their data in? What sort of data should they start getting in? What dashboard should they build? They've started... but need that ah-ha example to see how this tool will fit into their existing environment and workflow. The attached Bare_Bones_Splunk.pdf file guides the reader from the point of install to using the data already being indexed in index=_internal to replicate a few common use cases of Splunk:
- Monitor a web server
- Monitor an application server
- Monitor security incidents
The examples are really simple, and the resulting dashboard created in the tutorial is a poor example of something your boss might want (or not... how observant is your boss - do they just want a few graphs with nice colors?). But this will give someone a really quick intro to Splunk without having to do anything other than install (and then maybe they will be ready to tackle a broader introduction, like the Search Tutorial).
I have a user who asked me to look into some of his reports. He wanted the permissions of report 2 to match report 1. The reports are owned by two different people, but two people with similar roles and access. After we tweaked the settings for the report (shared in the app, read access for all, and write permissions for those with the appropriate roles), they are still having issues viewing and editing.

The owner of report 1 is the owner/creator of the report. The report runs as owner and is shared globally, yet he doesn't have permission to edit the actual alert. He created the report initially, so how come he can't edit it? I even cloned it and reassigned ownership, to no avail. Report 1 runs as owner, while report 2 has the option to run as owner or as the user. How come one report has that option while the other is locked to running as owner? As far as user two goes, his roles include permissions to the indexes used, as well as access to the app and the default search app, and he has even more roles and permissions than user 1. Yet he receives an error when trying to view the link that Splunk sends out with the attached report.

My question is: is there anywhere else I should be looking in order to find permission discrepancies? From everything I've seen, both users have access to the required indexes, have pretty much soft-admin on Splunk, and I assume they have viewed these reports in the past. From roles to users to capabilities, they seem to have everything in order. Is there something I should check in the configs?

Thanks for any guidance.
I often run into a case where I need to take the same dataset and compute aggregate statistics on different group-by sets, for instance if you want the output of this:

index=example
| stats avg(field1) by x, y, z
| append
    [ index=example
      | stats perc95(field2) by a, b, c ]

I am using the case of n=2 group-bys for convenience. In the general case there are N group-bys and arbitrary stats functions. What is the best way to optimize this kind of query without using append (which runs into subsearch limits)? Some of the patterns I can think of are below.

One way is to use appendpipe:

index=example
| appendpipe [ | stats avg(field1) by x, y, z ]
| appendpipe [ | stats perc95(field2) by a, b, c ]

Unfortunately this seems kind of slow, especially once you start adding more subsearches and preserving and passing a large number of non-transformed events throughout the search.

Another way is to use eventstats to preserve the event data, finishing it off with a final stats:

index=example
| eventstats avg(field1) as avg_field1 by x, y, z
| stats first(avg_field1) as avg_field1, perc95(field2) by a, b, c

Unfortunately this is not much faster. I think there is another way using streamstats in place of eventstats, but I still haven't figured out how to retrieve the last event without just invoking eventstats last() or relying on an expensive sort.

Another way I've tried is intentionally duplicating the data using mvexpand, which has the best performance by far:

index=example
``` Duplicate all the data ```
| eval key="1,2"
| makemv delim="," key
| mvexpand key
``` Set groupby = concatenation of the group-by field values ```
| eval groupby=case(key=1, x.",".y.",".z, key=2, a.",".b.",".c, true(), null())
| stats avg(field1), perc95(field2) by groupby

Are there any other patterns that are easier/faster? I'm curious how Splunk processes things under the hood. I know something called "map-reduce" is part of it, but I would be curious to know if anyone knows how to optimize this computation and why it's optimal in a theoretical sense.