All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Currently I am trying to optimize my application, and I would like to know if it is possible to use TERM() with a data model. I have tried the following:

| tstats count from datamodel=dm_name where TERM(host=10.10.10.10) by host

The search runs but returns no results. If this is possible, can you show me how to do it?
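A hedged sketch of what may work here (dm_name and the host field are taken from the question; adjust to your actual data model): TERM() matches a single indexed term, not a key=value pair, so host=10.10.10.10 inside TERM() will never match. Passing just the value, or filtering on the field directly, may behave better:

```
| tstats count from datamodel=dm_name where TERM(10.10.10.10) by host

| tstats count from datamodel=dm_name where host="10.10.10.10" by host
```

TERM(10.10.10.10) can work as a single term because the dots are minor breakers, so the whole IP is indexed as one major term.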
I am searching for these use cases; please help me with the File Action Anomaly use case and the Bad Reputation Login use case in Splunk Enterprise.
We have a timechart visualization on a dashboard. The events are uploaded manually on the last day of every month, with a timestamp of the last day of the month. The tooltip, when hovering over the line chart visualization, shows the date as the first day of the month in the format MMM D, YYYY. I would like to format the date to show either the date the source was indexed with, or only the month and year. Please give me your suggestions on how this can be achieved.
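One possible approach, sketched under the assumption that the panel can be driven by stats rather than timechart: bin the events by month and build a string label, so the x-axis (and tooltip) shows only month and year. The index name here is a placeholder:

```
index=your_index
| bin _time span=1mon
| eval Month=strftime(_time, "%b %Y")
| stats count by Month
```

The trade-off is that the x-axis becomes categorical rather than a true time axis, but the tooltip then shows e.g. "Mar 2020" instead of the first day of the month.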
I'm trying to reproduce a collection of Grafana dashboards we have using Splunk. Generally it's working okay, but there's one area I can't get my head around: I seem to get better resolution on the charts with Prometheus/Grafana than I see with Splunk.

In Grafana, the Prometheus query behind the chart is:

irate(mattermost_http_errors_total{instance=~"$server"}[1m])

In Splunk, the associated search is:

| mstats rate(_value) as count prestats=true WHERE metric_name="mattermost_http_errors_total" AND `mattermost_metrics` sourcetype=prometheus:metric AND (host="617d67c7fe56") span=60s BY host
| timechart rate(_value) as count span=60s BY host
| addtotals

The general structure/trends seem reasonably close, but Grafana seems to have much finer resolution on the data. I'm scraping the system under test at 15s intervals in both cases. Hovering over the data points shows me that Grafana is generating a value on the same period as the scraping interval, but Splunk is only generating data on the span value.

What I suspect: it's as if Grafana is doing a sliding one-minute window, evaluated at each point in time that it has data, whereas Splunk is taking absolute windows (span) and generating a single result. Is there something I'm missing with how to author the mstats searches? Is there some way to achieve a similar result? Am I correct in what I suspect is happening?

Note: my Grafana dashboard also has a series for the "Total", in the same way my Splunk query does with addtotals; I just didn't copy that query here.
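One way to approximate Grafana's sliding window, offered as an assumption rather than a known equivalent: compute the rate at the scrape interval (15s) and then smooth it with streamstats over a one-minute trailing time window:

```
| mstats rate(_value) as rate_val WHERE metric_name="mattermost_http_errors_total" AND sourcetype=prometheus:metric span=15s BY host
| streamstats time_window=60s avg(rate_val) as count by host
| timechart span=15s max(count) by host
| addtotals
```

This emits a point per 15s span, each reflecting the trailing minute, which is closer to Prometheus rate semantics than one value per absolute 60s bucket.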
I have the Jenkins App integrated with Splunk, which requires a token to interact with the Jenkins server, and it seems the token expires after a certain time (a month), so I have to generate a new token every time it expires. Is this expected behavior? If not, how do I fix it?
I have recently upgraded Splunk from 7.1.1 to 7.3.4 and ES from 5.2.2 to 5.3.1, but after the upgrade the Threat Activity dashboard does not show any data (data is only available up to the time Splunk was upgraded). I can see that the Threat Intelligence data model is enabled and accelerated, but "Data Model Audit" still shows the date and time of the last data the dashboard displays. Also, the Web and Network Traffic data models do report data (these are the dependent data models for the Threat Intelligence data model). Since this is my production system, I have not yet tried rebuilding the data model; I wanted to know if there is any other possible fix for this.
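A hedged diagnostic before touching the acceleration: compare what the accelerated summaries contain against the raw data model results, for example:

```
| tstats summariesonly=true count from datamodel=Threat_Intelligence by _time span=1d
```

If the same search with summariesonly=false does show recent days, the gap is in the acceleration summaries (pointing toward a rebuild); if neither shows recent data, the events are not reaching the data model in the first place.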
For example: if I am running the report on the 5th of May, I need the data from the 1st of November through the 30th of April, and I need to run this report every month for the last six months. I'm on 6.x. How can I set up this report so that whenever it runs for the last months' period, it does not include the current month? Thank you.
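A sketch using snapped relative time modifiers (the index name is a placeholder): earliest=-6mon@mon latest=@mon covers the six previous complete months and excludes the current month regardless of the day the report runs.

```
index=your_index earliest=-6mon@mon latest=@mon
| ...
```

Run on 5 May, this resolves to 1 November through 30 April, because @mon snaps each boundary to the start of its month.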
Hi all, a very easy question that's driving me crazy: I want to extract data from a lookup in a KV store. I'm able to do this using curl:

curl -k -u user:password https://localhost:8089/servicesNS/nobody/my_app/storage/collections/data/my_lookup

I'm able to see the configuration of the lookup using REST:

| rest /servicesNS/nobody/my_app/storage/collections/config/my_lookup

but when I try to see the data using rest:

| rest /servicesNS/nobody/my_app/storage/collections/data/my_lookup

I get "No results found". Where's my error? Thank you in advance. Ciao. Giuseppe
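One detail worth checking, stated as an assumption about the setup rather than a diagnosis: the | rest command is geared toward configuration endpoints, and the usual search-time route to KV store data is inputlookup against a lookup definition that points at the collection:

```
| inputlookup my_lookup
```

This assumes a lookup definition named my_lookup exists in transforms.conf with external_type = kvstore and collection = my_lookup.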
Hi, I want to create an app on my search heads (SH) in a distributed environment, but I do not have a deployer set up. I have four search heads, so how can I create an app on them without a deployer? I also tried to create an app on one of the SHs, but after restarting, the app gets deleted; any idea why that is? If there are any other solutions, kindly let me know.
The Splunk documentation about metrics.log is nice but, in my opinion, not entirely up to date or complete. In the section "Tcpout connections messages", the aggregation parameter for the values is missing. Is the data (like _tcp_KBps) reported by "name", by "destIp", or by something else? I've tried to figure this out by graphing _tcp_KBps over time with different aggregation parameters, but I have never been able to get a continuous line with a growing value (which should be the case). From time to time another field named "one_time_client=1" appears; what does it mean? When useACK is true, you get two additional metrics, max_ackq_size and current_ackq_size. They are not documented as far as I know; what do they mean?
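For exploring the aggregation question empirically, a sketch of the kind of search I would try (the field names are those mentioned in the question; whether "name" or "destIp" is the right split is exactly the open question):

```
index=_internal source=*metrics.log* group=tcpout_connections
| timechart span=1m sum(_tcp_KBps) as total_KBps by name
```

Summing within each interval, rather than averaging, is one way to test whether the per-"name" series add up to a continuous total.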
I am using third-party modules such as seccure and Crypto to decrypt encrypted log messages and am getting the error "cannot open shared object file: No such file or directory". However, when running the code locally outside Splunk, it works fine. The detailed error message is:

"OSError: Cannot load native module 'Cryptodome.Cipher._raw_ecb': Trying '_raw_ecb.cpython-37m-x86_64-linux-gnu.so': cannot open shared object file: No such file or directory"

Kindly give your suggestions. Thanks, Prince
PII Leaked
DNS Generator Anomaly
Encrypted C and C Command Anomaly
Command and Control Reputation Anomaly
File Action Anomaly
PowerShell Remote Access
SQL Shell Command
Hello, I am working on a dashboard with two text inputs, with tokens "ID_token" and "application_token". I need application_token to depend on the value of ID_token in the following way: if ID_token=* then application_token=app_X; else (ID_token != *) application_token=*. I have written this as:

<fieldset submitButton="false">
  <input type="text" token="ID_token" searchWhenChanged="true">
    <label>ID</label>
    <change>
      <condition match="'value'==&quot;*&quot;">
        <set token="application_token">app_X</set>
      </condition>
      <condition>
        <set token="application_token">*</set>
      </condition>
    </change>
    <initialValue>*</initialValue>
    <default>*</default>
  </input>
  <input type="text" token="application_token" searchWhenChanged="true">
    <label>Application</label>
    <default>app_X</default>
  </input>
</fieldset>

The problem is that when I search for an ID=1234, nothing happens to application_token; it just keeps the value "app_X" all the time. Has anyone encountered a similar problem and found a solution?
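A hedged guess at the cause, with a sketch: in Simple XML change conditions, 'value' in single quotes is treated as an eval field reference rather than the input's current value, so the match expression may never evaluate as intended. Using the condition's value attribute for a literal comparison is one alternative to try:

```
<change>
  <condition value="*">
    <set token="application_token">app_X</set>
  </condition>
  <condition>
    <set token="application_token">*</set>
  </condition>
</change>
```

This is a sketch, not a confirmed fix; whether value="*" compares literally for a text input is worth verifying against the Simple XML reference.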
Hi, basically their server sends logs one line at a time. When the data comes to Splunk, it is ingested as-is and does not follow the line-breaker configuration. Our target is to break the logs into events before "C:\Users\localserver>systeminfo". Can Splunk wait for the line breaker to be visible before it breaks the event? Or what is the best way to handle this issue? Example log:

C:\Users\localserver>systeminfo
Host Name: localserver
OS Name: Microsoft Windows 10 Enterprise
OS Version: 10.0.18362 N/A Build 18362
OS Manufacturer: Microsoft Corporation
OS Configuration: Member Workstation
OS Build Type: Multiprocessor Free
Registered Owner: company
Registered Organization: OOO
Original Install Date: 01/01/2020, 7:10:02 PM
System Boot Time: 4/28/2020, 12:43:21 PM
System Model: HP Samplebook
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
Windows Directory: C:\WINDOWS
System Directory: C:\WINDOWS\system32
Boot Device: \Device\HarddiskVolume1
C:\Users\localserver>
Host Name: localserver
OS Name: Microsoft Windows 10 Enterprise
OS Version: 10.0.18362 N/A Build 18362
OS Manufacturer: Microsoft Corporation
OS Configuration: Member Workstation
OS Build Type: Multiprocessor Free
Registered Owner: company
Registered Organization: OOO
Original Install Date: 01/01/2020, 7:10:02 PM
System Boot Time: 4/28/2020, 12:43:21 PM
System Model: HP Samplebook
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
Windows Directory: C:\WINDOWS
System Directory: C:\WINDOWS\system32
Boot Device: \Device\HarddiskVolume1

Here's the situation: their server is sending the logs, and every minute it sends the next line.
C:\Users\localserver>systeminfo
<After 1 min it sends the next line>
Host Name: localserver
<After 1 min it sends the next line>
OS Name: Microsoft Windows 10 Enterprise
<After 1 min it sends the next line>
OS Version: 10.0.18362 N/A Build 18362
<After 1 min it sends the next line>
OS Manufacturer: Microsoft Corporation
<After 1 min it sends the next line>
OS Configuration: Member Workstation
<After 1 min it sends the next line>

props.conf:

[sourcetype_name]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = (C\:\\Users)
TRUNCATE = 8000

If I ingest the log in bulk, it shows the "GREEN BOX" in the picture, whole and complete. But in my case it's staggered, ingesting one line per minute ("RED BOX").
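A hedged sketch, not a confirmed fix: with lines arriving a minute apart, the merging pipeline flushes an event after an idle timeout, so the props.conf anchor alone cannot hold an event open that long. For a file-monitor input, raising the input's idle timeout alongside the existing break rule is one thing to try (the path and the 120-second value are illustrative assumptions):

```
# inputs.conf (monitor stanza for this source)
[monitor:///path/to/the/log]
sourcetype = sourcetype_name
time_before_close = 120

# props.conf (same break rule, merge window widened)
[sourcetype_name]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = (C\:\\Users)
MAX_EVENTS = 1000
TRUNCATE = 8000
DATETIME_CONFIG = CURRENT
```

time_before_close is a real inputs.conf setting for monitored files (seconds to wait after EOF before closing); whether it helps depends on how the data actually reaches Splunk (file monitor vs. TCP), so treat this as a direction to test rather than a definitive answer.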
Hi, I am trying to get the occurrence of two strings for every 3-minute interval. I tried this:

index=xyz host="hostname" "rapidViewId=" OR "/user/mention"
| timechart span=3m count(eval(match(_raw,"rapidViewId="))) AS board, count(eval(match(_raw,"/user/mention"))) AS mention

I am getting the result in the intended format, but on checking the events for e.g. rapidViewId=, I can see the events are a mix of both (rapidViewId and /user/mention). Any idea what I am doing wrong? I need the individual count in every 3 minutes.
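One way to force each event into exactly one bucket, sketched with the assumption that an event containing both strings should count as "board": classify first, then timechart by the classification:

```
index=xyz host="hostname" "rapidViewId=" OR "/user/mention"
| eval type=case(searchmatch("rapidViewId="), "board", searchmatch("/user/mention"), "mention")
| timechart span=3m count by type
```

case() stops at the first true clause, so events matching both strings are counted once, under "board"; swap the clauses if the other priority is wanted.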
We are trying to use the CEF app to create a new output app to be deployed to our two indexers. However, during the "Define Output Groups" step of the "New CEF Output" wizard, the green "Create output group" button referenced in the documentation is missing. We are running Splunk Enterprise version 7.3.0, build 657388c7a488.
Hi, I am getting the events below from my Splunk search, but how do I show them in a pie chart?

Correlation Id :\"e7b4b14\", msg : Error is:[{\"code\":688,\"message\":\"api failed with error Invalid request".\"}]","podName":"test-service","category":"ERROR"}
Correlation Id :\"e7b4b14\", msg : Error is:[{\"code\":688,\"message\":\"api failed with error downsteam error".\"}]","podName":"test-service","category":"ERROR"}

Basically I want to show the string inside the "Error is" array (the message part) in the pie chart for today's date.
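A sketch under the assumption that the message text always follows "api failed with error" and ends at the next quote; the index name is a placeholder:

```
index=your_index "api failed with error" earliest=@d
| rex "api failed with error (?<err_msg>[^\"]+)"
| stats count by err_msg
```

With stats count by err_msg as the result, choosing the pie chart visualization renders one slice per distinct message, and earliest=@d limits the search to today.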
I'm trying to perform a simple GET action with this add-on, but I'm not GETting anywhere. This is reported in the logs:

ERROR sendmodalert - Error in 'sendalert' command: Alert script returned error code 4.

Looking through the code, this error is reported if the CIM app is not installed, but installing the app makes no difference. I'm setting these within the alert:

Endpoint: http://host.domain.local:5000/command
Query string params: cmd=disarm&master_pin=ABCD
HTTP method: GET

Any suggestions as to what I'm doing wrong?
Hi Team, is there an alternative to 'eventstats' for counting all the events? Using it results in data loss if the duration is exceeded; it only works fine up to about 6 or 7 hours. My search is based on 'transactionStatus=*'; it works fine if I specify 'FAILED', but otherwise it doesn't calculate the percentage against the overall total, only against the total of failed events. My base search:

transactionStatus=* earliest=-1d
| eventstats count as "Total"
| rex field=_raw "Error response from Sabre pad for getDetails : ErrorType : BusinessLogic Code : ERR.SWS.HOST.ERROR_IN_RESPONSE, Message\s*:\s*(?<ErrorMsg>.+)"
| rex field=vldCxrs "carrierCode=(?<carrierCode>..)"
| search carrierCode=*
| stats count(eval(transactionStatus="FAILED")) as "Failed_Count" by Total, ErrorMsg, tpid, carrierCode
| eval Error_Percent=round((Failed_Count/Total)*100,2)
| fields tpid, carrierCode, ErrorMsg, Error_Percent, Failed_Count, Total
| sort -Failed_Count

Regards, Nishant
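One memory-friendlier pattern, sketched on the assumption that the overall total should cover the same filtered event set: aggregate first with stats, then let eventstats run over the small result table instead of the raw events:

```
transactionStatus=* earliest=-1d
| rex field=_raw "Message\s*:\s*(?<ErrorMsg>.+)"
| rex field=vldCxrs "carrierCode=(?<carrierCode>..)"
| search carrierCode=*
| stats count as event_count, count(eval(transactionStatus="FAILED")) as Failed_Count by ErrorMsg, tpid, carrierCode
| eventstats sum(event_count) as Total
| eval Error_Percent=round((Failed_Count/Total)*100,2)
| fields tpid, carrierCode, ErrorMsg, Error_Percent, Failed_Count, Total
| sort - Failed_Count
```

Because eventstats here sees one row per ErrorMsg/tpid/carrierCode combination rather than every raw event, the memory limits that cause the data loss over long ranges are far less likely to bite.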
I want to send alerts to a webhook from the Splunk App for Infrastructure, generated by resource monitoring for each alert or error raised. Could you please help me with the payload generated by configuring the custom webhook notification in the Splunk App for Infrastructure? If you can provide a sample payload, that would be a great help. Thanks, Karan