All Topics


I'm trying to forward logs based on index to a third-party system while still retaining the logs in Splunk. I've tried adding a tcpout stanza in outputs.conf, but it pushes all logs to the third-party system and no longer stores them in Splunk; I'm unable to search new logs in Splunk.

[tcpout]
defaultGroup = index1

[tcpout:index1]
sendCookedData = false   (tried with and without this; neither works)
server = 1.1.1.1:12468
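A sketch of one documented pattern for this, assuming the configuration lives on a heavy forwarder or indexer (indexAndForward is not honored on a universal forwarder); route_to_thirdparty and thirdparty_group are placeholder names:

outputs.conf:
[tcpout]
indexAndForward = true

[tcpout:thirdparty_group]
server = 1.1.1.1:12468
sendCookedData = false

props.conf:
[default]
TRANSFORMS-thirdparty = route_to_thirdparty

transforms.conf:
[route_to_thirdparty]
SOURCE_KEY = _MetaData:Index
REGEX = ^index1$
DEST_KEY = _TCP_ROUTING
FORMAT = thirdparty_group

With no defaultGroup set, only events whose index matches the transform are routed out, while indexAndForward keeps a local indexed copy of everything.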
Hi, I am starting out as a Splunk admin and am confused about one topic. It might be silly. While creating an index, we get the option to set the Searchable Retention (in days). I have read in the documentation that Splunk has 4 bucket stages: hot, warm, cold, and frozen. My question is: suppose I set it to 90 days. During this 90-day period, will the data stay in the hot bucket the entire time and roll to frozen once the 90 days are over? Also, how is setting 90 days under Searchable Retention different from setting this below?

[main]
frozenTimePeriodInSecs = 7776000

Please explain. Thanks in advance.
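For reference, those two controls are the same setting: the Searchable Retention field in the UI writes frozenTimePeriodInSecs into indexes.conf. Buckets roll from hot to warm to cold based on size and count limits (settings like maxHotBuckets and maxWarmDBCount), not on this timer, so data will not sit in hot for the full 90 days; a bucket is frozen (deleted, or archived if coldToFrozenDir/coldToFrozenScript is configured) once its newest event is older than the retention period. A minimal indexes.conf sketch showing the arithmetic (conf values do not take comma separators):

[main]
# 90 days x 86400 seconds/day = 7776000 seconds
frozenTimePeriodInSecs = 7776000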
How do we enable OTel gateway logs to flow through to Splunk? Even when we use the values.yaml settings noted here, we don't see any logs from the gateway: https://github.com/signalfx/splunk-otel-collector-chart/blob/main/examples/collector-gateway-only/collector-gateway-only-values.yaml We're looking to get the gateway logs to gain a better understanding of the health of the gateway.
Hi, I previously had success working with CSS selectors for Splunk dashboards with the help of people here, and my previous question was solved. So please understand that I understand CSS selectors, and I've been bashing away at this for hours.

What I have is a standard bar chart of 2 series over time. I am trying to use a CSS selector to move the first series' bars so that they overlap the second series' bars next to them, giving the appearance that the smaller bar is a subset of the larger bar. I have attached a photo using Google Inspect on the bar in the Splunk dashboard. You can see the bar for the first series has the class g.highcharts-series.highcharts-series-0. On the right side, you can see I injected a CSS selector into the webpage, and no combination of positioning makes the series budge at all.

Of note, I did find this paragraph on the Highcharts website - https://www.highcharts.com/docs/chart-design-and-style/style-by-css#what-can-be-styled: "However, layout and positioning of elements like the title or legend cannot be controlled by CSS. This is a limitation of CSS for SVG, that does not (yet - SVG 2 Geometric Style Properties) allow geometric attributes like x, y, width or height. And even if those were settable, we would still need to compute the layout flow in JavaScript. Instead, positioning is subject to Highcharts JavaScript options like align, verticalAlign etc."

Okay, so the bars probably cannot be moved. That is a very unfortunate limitation. Is it possible to make 2 bars overlap each other on a Splunk dashboard at all? I know I can work around it by using math to subtract one series from another and stacking the bars, but it is only a workaround.
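For the record, a minimal SPL sketch of the subtract-and-stack workaround mentioned above, assuming hypothetical fields total and subset and a bar chart with charting.chart.stackMode set to stacked:

... | timechart span=1d sum(subset) AS subset, sum(total) AS total
| eval remainder = total - subset
| fields _time, subset, remainder

Stacking subset and remainder yields a bar whose full height equals total, with the subset segment visually contained inside it, which approximates the overlap effect.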
Hi All, We have index=gems; in this index we have onboarded both GEMS servers and WMS servers, and we have created one alert named CBSIT Alert GEMS NFS stale. We now want to create an alert for the WMS servers using the same alert, so a single alert should carry the GEMS alert name when a GEMS server triggers it and the WMS alert name when a WMS server triggers it. The index=gems has 7 GEMS servers and 7 WMS servers.

Ex: GEMS server name sclpisgpgemspapp001, WMS server name silpdb5300.ssdc.albert.com

We are using the below SPL query for the CBSIT Alert GEMS NFS stale alert:

index="gems" source="/tmp/unresponsive" sourcetype=cmi:gems_unresponsive
| table host _raw
| eval timestamp=strftime(now(),"%Y-%m-%d %H:%M:%S")
| eval correlation_id=timestamp.":".host
| eval assignment_group="CBS IT - Application Hosting - Unix", impact=3, category="Application", subcategory="Repair/Fix", contact_type="Event", customer="no573", state=4, urgency=3, ci=host
| eval description=_raw, short_description="NFS stale on ".host

Can you please help us here?
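A minimal sketch of one way to do this in the same search, assuming the host naming conventions in the examples (GEMS hostnames contain "gems", WMS hostnames end in .ssdc.albert.com) hold for all 14 servers; the WMS alert name is a placeholder:

...
| eval alert_name=case(match(host, "gems"), "CBSIT Alert GEMS NFS stale",
                       match(host, "\.ssdc\.albert\.com$"), "CBSIT Alert WMS NFS stale",
                       true(), "CBSIT Alert NFS stale")
| eval short_description=alert_name." on ".host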
The following expression works in regex101 (https://regex101.com/r/4D68Ip/1) but not in Splunk. Any help would be appreciated.

(?i)nTimeframe\s+\(\w+\)\s+\w+\s+\w+\s+\%\s+\w+\\\w+\\\w+\\\w+\\\w\d+\:\d+\-\d+\:\d+\\\w+\\\w+\\\w+\\\w(?P<Successful>\d+)

We are attempting to extract 58570 from the below string.

TEST STRING

run.\r\nTimeframe (PT) Success Failed % Failed\r\n\r\n05:15-06:14\r\n\r\n58570\r\n\r\n681\r\n\r\n1.15\r\n\r\nIf you believe you've received this email in error, please see your Splunk"}
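One likely culprit: in regex101 a typed backslash is a single character, but inside an SPL double-quoted rex string, backslashes are consumed twice, once by the search-language parser and once by the regex engine, so matching one literal backslash generally takes \\\\. A hedged sketch, assuming the event really contains the literal characters \r\n (as the sample suggests) rather than actual line breaks:

... | rex "(?i)\d+:\d+-\d+:\d+(\\\\r\\\\n)+(?<Successful>\d+)"

If it still fails, check whether the indexed event contains literal backslashes or real CR/LF characters, and adjust the escaping accordingly.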
I am looking for help with a Splunk configuration that the documentation does not seem to cover and that cannot be found on Splunk Answers. The problem is that selected fields are not persisting between sessions/alerts. I know this is possible, since my old version of Splunk had this ability. Ex.:

1. User clicks on the drilldown search for a Notable Event and marks the Selected Fields to use.
2. User closes the tab and reopens the same drilldown search for that Notable Event.
3. The Selected Fields are gone and the list is back to its default state.

How do I get selected fields to save per user?
I have a working script that allows me to retrieve the job ID of a search in Splunk. This works in Windows using GNU curl (and also works, albeit modified, in the native Ubuntu Linux version of curl). I am now trying to take this same approach and run it in Windows PowerShell; unfortunately, I have not yet been successful. Here is what I have so far (the working curl version is shown first).

curl.exe -k -H "Authorization: Bearer <MYTOKEN>" https://<MYINSTANCE>.splunkcloud.com:8089/services/search/jobs/ --data-urlencode search='<MYSEARCH>'

$headers = @{ "Authorization" = "Bearer <MYTOKEN>" }
$body = @{ "search" = "<MYSEARCH>" }
# curl's -k (skip certificate checks) has no switch in Windows PowerShell 5.1;
# on PowerShell 7+ you can add -SkipCertificateCheck to the call below.
$response = Invoke-WebRequest -Uri "https://<MYINSTANCE>.splunkcloud.com:8089/services/search/jobs/" `
    -Method Post `
    -Headers $headers `
    -ContentType "application/x-www-form-urlencoded" `
    -Body $body

Any guidance is appreciated.
I have users.csv as a lookup file with almost 20K users. I'm writing a query for authentication events over a specific time range for all these users. The CSV file has only one column with the email address of each user, and the column header is email.

1) Get the user email from the lookup users.csv file
2) Pass the user email into the search
3) Count authentications per day for the specific time range

I don't have email as a field in the authentication event, but I can derive USER-EMAIL in the authentication event using:

index="IndexName"
| fields _time, eventType, "target{}.alternateId", "target{}.type"
| search eventType="user.authentication.sso"
| rename "target{}.alternateId" AS targetId
| rename "target{}.type" AS targetType
| eval Application=mvindex(targetId, mvfind(targetType, "AppInstance"))
| eval "USER-EMAIL"=mvindex(targetId, mvfind(targetType, "AppUser"))

Authentication event:

{"actor": {"id": "00u1p2k8w5CVuKgeq4h7", "type": "User", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}, "device": null, "authenticationContext": {"authenticationProvider": null, "credentialProvider": null, "credentialType": null, "issuer": null, "interface": null, "authenticationStep": 0}, "displayMessage": "User single sign on to app", "eventType": "user.authentication.sso", "outcome": {"result": "SUCCESS", "reason": null}, "published": "2024-02-20T22:25:18.552Z", "signOnMode": "OpenID Connect", "target": [{"id": "XXXXXXX", "type": "AppInstance", "alternateId": "APPLICATION-NAME", "displayName": "OpenID Connect Client", "detailEntry": {"signOnModeType": "OPENID_CONNECT"}}, {"id": "YYYYYY", "type": "AppUser", "alternateId": "USER-EMAIL", "displayName": "USER-NAME", "detailEntry": null}]}

index="indexName" eventType="user.authentication.sso" [| inputlookup "users.csv"] is not working. Any help is appreciated.
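A sketch of one way to wire this together, assuming the derived field is renamed to USER_EMAIL (a hyphenated name like USER-EMAIL needs quoting everywhere and is easy to break): filter after the eval, renaming the lookup's email column to match, then bin by day:

index="indexName" eventType="user.authentication.sso"
| rename "target{}.alternateId" AS targetId, "target{}.type" AS targetType
| eval USER_EMAIL=mvindex(targetId, mvfind(targetType, "AppUser"))
| search [| inputlookup users.csv | rename email AS USER_EMAIL | fields USER_EMAIL]
| bin _time span=1d
| stats count BY _time, USER_EMAIL

The subsearch expands to USER_EMAIL="a@x.com" OR USER_EMAIL="b@x.com" ..., which is why the field names must line up, and why the bare [| inputlookup "users.csv"] in the base search cannot match: email exists in no event, and the derived field does not exist until after the eval. Also note that subsearch output is capped (10,000 results by default), so with ~20K users a lookup-based filter may be safer:

... | lookup users.csv email AS USER_EMAIL OUTPUT email AS matched | where isnotnull(matched)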
Hi All, I'm trying to debug netskope_email_notification.py from TA-NetSkopeAppForSplunk by running this command:

splunk cmd python -m pdb netskope_email_notification.py

It runs until it hits this line:

session_key = sys.stdin.readline().strip()

How do I get past this? Maybe something like this, but with a session key:

splunk cmd python -m pdb netskope_email_notification.py < session_key

If so, how do you create an external session key? TIA, Joe
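For what it's worth, a session key can be minted from the management port with the standard auth endpoint (a sketch; adjust host and credentials):

curl -k https://localhost:8089/services/auth/login -d username=admin -d password='yourpassword'
# the response contains <sessionKey>...</sessionKey>

Note that redirecting stdin (< session_key) fights with pdb, because pdb reads its own commands from the same stdin. One practical approach is to run under pdb normally, step (n) onto the readline() call so it blocks, then paste the session key and press Enter; control returns to the debugger afterwards. Alternatively, let the readline() consume an empty line and then assign the variable at the prompt with !session_key = "<your-key>".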
Hi, Is there a way to regroup similar values without defining tons of regexes? Let's say I do a search that returns URLs, and those URLs contain parameters in the path:

/api/12345/info
/api/1234/info
/api/info/124/service
/api/info/123/service

I know we all see a pattern there that could fit a regex ;) But remember, I don't want to use one. I live in the hope that there is some magic that can regroup URLs that are similar, something like:

/api//info
/api/info//service
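One built-in feature that may be the magic hoped for here is the cluster command, which groups events by textual similarity with no regex at all (a sketch assuming the URL sits in a field named url; t is the similarity threshold between 0 and 1):

... | cluster t=0.8 showcount=true field=url labelonly=true
| stats values(url) AS urls, max(cluster_count) AS count BY cluster_label

Similar URLs end up sharing a cluster_label, so /api/12345/info and /api/1234/info fall into one group. It will not produce the templated output shown above, but it does the regrouping.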
I have a string field: provTimes: a=10; b=15; c=10; It basically has semicolon-separated sub-fields in its value, and each sub-field has a number on the right-hand side. These sub-fields are dynamic: they can be a, v, e, f in one event and z, y in another. Ignoring the sub-field names, I'm only concerned with the numbers they hold; I just want to add them all up.

Example:
provTimes: a=10; b=15; c=10; gives result = 35
provTimes: x=10; b=5; gives result = 15
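A minimal sketch: pull every number to the right of an = into a multivalue field, then expand and sum per event (the streamstats row number keeps each event's values together):

... | rex field=provTimes max_match=0 "=\s*(?<num>\d+)"
| streamstats count AS row
| mvexpand num
| stats sum(num) AS result BY row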
I am trying to write a search that will pull the 10 (or so) most recent events for each host. The tail and head commands apparently do not allow any grouping, and I am trying to wrap my head around how to do this. I know this does not work, but it is what I am looking for:

index=index1 | head 10 by host

The closest I can come up with is:

index=index1 | stats values(_raw) by host

But that still gives me everything in the time range, not just the last 10 events per host.
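Two standard idioms that do what head 10 by host would, relying on events arriving in reverse time order:

index=index1 | dedup 10 host

or, keeping an explicit rank:

index=index1 | streamstats count AS recent_rank BY host | where recent_rank <= 10

dedup N field keeps the first N events for each value of the field, which in a default search means the N most recent per host.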
Hi, I would like some help extracting each line of this data into separate fields: name, id, speed, duplex, state, and mac address. It is critical that "state" is its own field. I'm getting stuck and need help. Thank you. Data below:

name                    id    speed/duplex/state            mac address
--------------------------------------------------------------------------------
ethernet1/3             66    1000/full/up                  b6:2c:23:e0:40:42
ethernet1/4             67    1000/full/up                  b6:2c:23:e0:40:43
ethernet1/5             68    10000/full/up                 b6:2c:23:e0:40:44
ethernet1/6             69    10000/full/up                 b6:2c:23:e0:40:45
ethernet1/7             70    10000/full/up                 b6:2c:23:e0:40:46
ethernet1/8             71    10000/full/up                 b6:2c:23:e0:40:47
ae1                     16    [n/a]/[n/a]/up                b6:2c:23:e0:40:10
ae2                     17    [n/a]/[n/a]/up                b6:2c:23:e0:40:11
ha1-a                   5     1000/full/up                  d1:f4:b3:c3:25:97
ha1-b                   7     1000/full/up                  d1:f4:b3:c3:25:96
vlan                    1     [n/a]/[n/a]/up                b6:2c:23:e0:40:01
loopback                3     [n/a]/[n/a]/up                b6:2c:23:e0:40:03
tunnel                  4     [n/a]/[n/a]/up                b6:2c:23:e0:40:04
hsci                    8     40000/full/up                 01:20:6c:1c:81:08

Any help will be appreciated. Thanks,
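A hedged rex sketch, assuming each interface row arrives as its own event and that speed and duplex are either plain values or the literal [n/a] (the alternation keeps the embedded slashes from swallowing state):

... | rex "^(?<name>\S+)\s+(?<id>\d+)\s+(?<speed>\[n/a\]|\d+)/(?<duplex>\[n/a\]|\w+)/(?<state>\w+)\s+(?<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2})"

If the whole table is one multiline event instead, add (?m) at the front and max_match=0 to the rex, then expand the resulting multivalue fields (e.g. with mvzip and mvexpand) to get one row per interface.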
Hi, I have a KV store time-based lookup generated from DHCP logs with content like this:

time,ip,hostname,mac
1709093697,10.223.5.43,host-43,aa:bb:cc:dd:ee:ff

and transforms.conf for it:

[dhcp_timebased_lookup]
collection = dhcp_timebased_collection
external_type = kvstore
fields_list = _key,time,ip,hostname,mac
max_offset_secs = 691200
min_offset_secs = 0
time_field = time
time_format = %s

The lookup works well when I run a search which pulls events from an index:

index=test source=timebased | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname | table _time dest_ip hostname

The hostname is there:

_time dest_ip hostname
1709093697 10.223.5.43 host-43

But when I use this lookup after non-event-generating commands, it does not work:

index=test source=timebased | table _time dest_ip | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname

OR

index=test source=timebased | stats count BY _time dest_ip | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname

OR

| makeresults | eval dest_ip = "10.223.5.43", _time = 1709093697 | lookup dhcp_timebased_lookup ip AS dest_ip OUTPUT hostname

OR

| tstats count from datamodel=SomeDM BY _time SomeDM.dest_ip span=1s | lookup dhcp_timebased_lookup ip AS "SomeDM.dest_ip" OUTPUT hostname

The hostname doesn't show up. If I turn the time-based setting off for this lookup, it outputs hostnames for all the searches above. It makes me think there is some difference between the _time field in events' metadata and the _time field in statistics. Is that so? And is there a solution besides the "join with inputlookup and addinfo" workaround?
Which version of the Universal Forwarder for Ubuntu (Debian 64-bit) is compatible with Splunk Cloud version 9.1.2308.203?
Hello community, I am testing the interactions on a pie chart to allow my users to click on a specific segment and have the rest of the dashboard adapt accordingly (screenshots: before / after clicking on an element of the pie chart). For this, I set a token on the pie chart and use it in the elements which must be updated accordingly (via a simple "search"). I wanted to add a "Reset" button to clear this filter. However, I'm a little stuck; I don't really know how to configure it. I tried the same approach as the pie chart interaction, telling myself that clicking the button would reset the token, but it breaks my dashboard. While waiting to find a solution, I "cheated" by putting an interaction on the button which reloads the web page to return to the dashboard, which cancels any existing filter, but it is not optimal. Do you have any idea how to reset the pie chart token without completely reloading the dashboard page? Best regards, Rajaion
I have two different queries: one calculates the total number of critical alerts, and the second calculates the total time critical alerts were "opened". I need to calculate the average between them (time/count). How can I achieve this?
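A minimal sketch of one way to combine the two, assuming hypothetical base searches and a hypothetical per-alert duration field open_seconds:

index=alerts severity=critical | stats count AS total_critical
| appendcols [ search index=alerts severity=critical status=opened | stats sum(open_seconds) AS total_open_time ]
| eval avg_time_per_alert = round(total_open_time / total_critical, 2)

If both numbers can come from the same set of events, a single stats (| stats count AS total_critical, sum(open_seconds) AS total_open_time | eval ...) avoids the subsearch entirely.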
How do I show total count values in the labels of a pie chart instead of percentages? For example, I want to show the overall count (i.e. 501455) next to EOL. @developers
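The slice labels on a Simple XML pie chart are not directly configurable, but a common workaround is to bake the count into the category value itself (a sketch assuming a hypothetical grouping field named status):

... | stats count BY status
| eval status = status." (".tostring(count, "commas").")"

The pie then renders labels like EOL (501,455), while slice size still comes from count.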
Hi All, I am fetching dashboards using the REST command:

| rest /servicesNS/-/-/data/ui/views

Not all the dashboards returned by this command are visible in the Splunk UI. Can anyone help me understand why this is happening? Regards, PNV
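For what it's worth, that endpoint returns every view readable by your role across all apps and owners, including private (user-level) dashboards, views in apps whose UI is hidden or disabled, and views flagged as not visible, while the UI lists only a subset. A sketch that surfaces the usual culprits:

| rest /servicesNS/-/-/data/ui/views
| table title label eai:acl.app eai:acl.owner eai:acl.sharing isDashboard isVisible

Comparing eai:acl.sharing (user vs app/global) and isVisible against what the UI shows usually explains the difference.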