All Posts


DATETIME_CONFIG = CURRENT is not the same as NONE. CURRENT sets the timestamp from the aggregation queue time. NONE, in this instance, sets the timestamp from the time the event is handed over to Splunk by the modular input script. Splunk then still needs to send the data to an indexer, which is where _indextime is set. Yes, the data is cooked and _time is set on the heavy forwarder, but note that _indextime is NOT. An easy way to see this in action is to look at any of your data being ingested by DB Connect with _time set to CURRENT. The difference will usually be negative, but every once in a while you'll see it jump to a few seconds, usually due to a blocked output queue. And of course any clock difference between the heavy forwarder and the indexer will cause the times to be off as well.
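As a quick way to watch that behavior, a sketch along these lines (the index name is a placeholder, not from the post) charts the ingestion lag over time so spikes from blocked output queues stand out:

index=your_dbconnect_index
| eval lag = _indextime - _time
| timechart span=5m avg(lag) max(lag)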
Are you sure this is your literal search? Because you cannot pipe to a tstats command unless it's with prestats=t append=t. Also, what does your security_content_summariesonly macro expand to? Also also - you're appending two "full" searches. Are you sure you're not hitting subsearch limits? And back to the point - that's what job details and job log are for - see the timeline, see where Splunk spends its time. Check the scanned results vs. returned results...
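For reference, the shape of search that tstats does accept downstream looks roughly like this (the data models and filters are illustrative, not taken from the original search):

| tstats prestats=t count from datamodel=Network_Traffic where All_Traffic.action="blocked" by _time span=1d
| tstats prestats=t append=t count from datamodel=Web where Web.action="blocked" by _time span=1d
| timechart span=1d count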
Currently, we receive a single email alert via Notable Event Aggregation Policies (NEAP) whenever our ITSI services transition from normal to high or critical. However, we need an automated process that sends recurring email alerts every 5 minutes if the service remains degraded and hasn't reverted back to normal. From my research, many forums and documentation suggest achieving this through Correlation Searches. However, since we rely on KPI alerting, and none of our Correlation Searches (even the out-of-the-box ones) seem to function properly, this approach hasn't worked for us... Given the critical nature of the services we monitor, we’re seeking guidance on setting up recurring alerts using NEAPs or any other reliable method within Splunk ITSI. Any assistance or insights on how to configure this would be greatly appreciated.
Hi everyone! I am working on building a dashboard which captures all the firewall, web proxy, EDR, WAF, email, and DLP blocks for the last 6 months in a table format which should look like this:

I am able to write a query which gives me the count for each parameter, and then I append all the single queries into one, which makes the final query run slower and take forever to complete. Here is the final query:

| tstats `security_content_summariesonly` count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time
| eval Source="Firewall"
| tstats `security_content_summariesonly` count as Blocked from datamodel=Web where sourcetype IN ("alertlogic:waf","aemcdn","aws:*","azure:firewall:*") AND Web.action="block" earliest=-6mon latest=now by _time
| eval Source="WAF"
| append [search index=zscaler* action=blocked sourcetype="zscalernss-web" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count as Blocked by _time | eval Source="Web Proxy"]
| append [| tstats summariesonly=false dc(Message_Log.msg.header.message-id) as Blocked from datamodel=pps_ondemand where (Message_Log.filter.routeDirection="inbound") AND (Message_Log.filter.disposition="discard" OR Message_Log.filter.disposition="reject" OR Message_Log.filter.quarantine.folder="Spam*") earliest=-6mon@mon latest=now by _time | eval Source="Email"]
| append [search index=crowdstrike-hc sourcetype="CrowdStrike:Event:Streams:JSON" "metadata.eventType"=DetectionSummaryEvent metadata.customerIDString=* earliest=-6mon@mon latest=now | bin _time span=1mon | transaction "event.DetectId" | search action=blocked NOT action=allowed | stats dc(event.DetectId) as Blocked by _time | eval Source="EDR"]
| append [search index=forcepoint_dlp sourcetype IN ("forcepoint:dlp","forcepoint:dlp:csv") action="blocked" earliest=-6mon@mon latest=now | bin _time span=1mon | stats count(action) as Blocked by _time | eval Source="DLP"]
| eval MonthNum=strftime(_time, "%Y-%m"), MonthName=strftime(_time, "%b")
| stats sum(Blocked) as Blocked by Source MonthNum MonthName
| xyseries Source MonthName Blocked
| addinfo
| table Source [| makeresults count=7 | streamstats count as month_offset | eval start_epoch=relative_time(now(),"-6mon@mon"), end_epoch=now() | eval start_month=strftime(start_epoch, "%Y-%m-01") | eval month_epoch = relative_time(strptime(start_month, "%Y-%m-%d"), "+" . (month_offset-1) . "mon") | where month_epoch <= end_epoch | eval month=strftime(month_epoch, "%b") | stats list(month) as search ]

I figured out the issue is with the firewall query:

| tstats `security_content_summariesonly` count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time
| eval Source="Firewall"

Can someone guide me on how to fix this issue? I have been stuck on it for 2 weeks.
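(As a side note, tstats accepts a span directly in the by clause, so results come back pre-bucketed by month rather than one row per raw _time. The sketch below reuses the firewall query from above with span=1mon added; it is illustrative only, not a confirmed fix:)

| tstats `security_content_summariesonly` count as Blocked from datamodel=Network_Traffic where sourcetype IN ("cp_log", "cisco:asa", "pan:traffic") AND All_Traffic.action="blocked" earliest=-6mon@mon latest=now by _time span=1mon
| eval Source="Firewall"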
You can try the `showSplitSeries` option in Dashboard Studio to show each line/series as its own chart. See this run-anywhere sample dashboard:

{
  "title": "Test_Dynamic_Charting",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": { "defaultValue": "-24h@h,now", "token": "global_time" },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_5sPrf0wX": {
      "dataSources": { "primary": "ds_jq4P4CeS" },
      "options": {
        "showIndependentYRanges": true,
        "showSplitSeries": true,
        "yAxisMajorTickSize": 4
      },
      "type": "splunk.line"
    }
  },
  "dataSources": {
    "ds_jq4P4CeS": {
      "name": "Search_1",
      "options": {
        "query": "index=_internal \n| eval sourcetype=sourcetype.\"##\".log_level\n| timechart count by sourcetype"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [ "input_global_trp" ],
    "layoutDefinitions": {
      "layout_1": {
        "options": { "display": "auto", "height": 960, "width": 1440 },
        "structure": [
          {
            "item": "viz_5sPrf0wX",
            "position": { "h": 960, "w": 1240, "x": 0, "y": 0 },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "options": {},
    "tabs": {
      "items": [ { "label": "New tab", "layoutId": "layout_1" } ]
    }
  }
}
It's normal for _indextime not to match _time exactly, since there is always some delay from event transmission and processing. How big is the lag you see?
Hi @sainag_splunk, I probably didn't explain it right. The data that flows in is under the following sourcetypes:
tenable:io:vuln
tenable:io:assets
tenable:io:plugin
tenable:io:audit_logs
And the Tenable App for Splunk at https://splunkbase.splunk.com/app/4061 seems to present only the tenable:io:vuln sourcetype. Are there any other presentations, by any chance, for the assets, plugin, and audit_logs data?
We're using the Tenable Add-on for Splunk (TA-tenable) to ingest data from Tenable.io. The app's props.conf has the following:

[tenable:io:vuln]
DATETIME_CONFIG = NONE

When we run the following SPL:

index=tenable sourcetype="tenable:io:vuln"
| eval lag = _indextime - _time

we are seeing non-zero lag values, even though I expect the lag to be zero if _time truly equals _indextime. If anything, I would expect DATETIME_CONFIG = CURRENT. What am I missing?
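If it helps to quantify what is being seen, a small extension of the search above (just a sketch, building only on the SPL already in the question) summarizes the lag per host:

index=tenable sourcetype="tenable:io:vuln"
| eval lag = _indextime - _time
| stats min(lag) avg(lag) max(lag) perc95(lag) by host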
I noticed that I don't have the Splunk Python SDK, because in my Python script I don't have import splunklib or import splunk. I am using the Python script within the Alteryx tool.
Hi, thanks for your input. Where do I add this in my search query? This is how my URL looks:  https://server/services/search/jobs/export?search=search%20index%3Dcfs_apiconnect_102212%20%20%20%0Asourcetype%3D%22cfs_apigee_102212_st%22%20%20%0Aearliest%3D-1d%40d%20latest%3D%40d%20%0Aorganization%20IN%20(%22ccb-na%22%2C%22ccb-na-ext%22)%20%0AclientId%3D%22AMZ%22%20%0Astatus_code%3D200%0Aenvironment%3D%22XYZ-uat03%22%0A%7C%20table%20%20_time%2CclientId%2Corganization%2Cenvironment%2CproxyBasePath%2Capi_name&&output_mode=csv 
Thanks for pinging me.  @joemcmahon I haven't had many reasons to touch this app for quite  a while, but I'll try to assess how much work the upgrade would be to keep the app current.  No promises I can get to it this week, but I'll try soon.  Splunk is not really part of my regular work or hobby life anymore, so not something I put much focus on these days.
Hi, any guidance on this?
Hi @joemcmahon 
The Upgrade Readiness App warning for the Modal Text Message App version 2.0.0 regarding jQuery 3.5 compatibility is valid. Splunk Enterprise 9.0 and later require apps to use jQuery 3.5.x or later due to security vulnerabilities in older versions. Version 2.0.0 of the Modal Text Message App uses an older version of jQuery; you could contact the developer @rjthibod, who may be able to resolve this with an updated version.
Splunk announced with version 8.2.0 that apps need to move to jQuery 3.5, as older versions will be removed from future versions of Splunk. That was a number of versions ago, so support for the older jQuery may well be removed soon, if it hasn't been already, which would cause the app to stop working. Check out https://docs.splunk.com/Documentation/UpgradejQuery/1/UpgradejQuery/jQueryOverview for more info.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
The Upgrade Readiness App is flagging version 2.0.0 of the Modal Text Message App for Splunk as not jQuery 3.5 compatible. Is this a non-issue, or does the app need to be slightly altered? It's installed in a 9.4.1 Splunk Enterprise environment.
Hello @livehybrid  I'm on Splunk Enterprise 9.2.1 and use this application:   Example XML:   Regards
Help me with a Splunk query to monitor the CPU and memory utilized by Splunk ad-hoc and alert searches.
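One possible starting point is the resource-usage data in the _introspection index. The sketch below assumes the default splunk_resource_usage sourcetype and its data.* fields are available in your environment, so treat the field names as assumptions to verify:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type="search"
| eval search_type=coalesce('data.search_props.type', "unknown")
| stats max(data.pct_cpu) as peak_cpu_pct avg(data.pct_cpu) as avg_cpu_pct max(data.mem_used) as peak_mem_mb by data.search_props.sid data.search_props.user search_type

data.search_props.type typically distinguishes ad-hoc from scheduled searches, so splitting on it separates the two; scheduled alert searches generally appear under the scheduled type.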
Hi @grca 
Using the Splunk App for Databricks itself does not typically incur additional Splunk licensing costs beyond your existing Splunk Enterprise or Splunk Cloud license. However, any data ingested into Splunk from Databricks using this app will consume your Splunk license volume, just like any other data source. You should also ensure your Databricks license permits the connection and data access by external applications like Splunk. More info on the licensing specific to the Databricks app can be found at https://www.databricks.com/legal/db-license
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Are there any licensing concerns to be considered for the integration between Splunk and Databricks using the plugin here: https://splunkbase.splunk.com/app/5416? No redistribution or sale - just plainly connecting one environment to the other.
Hi @esllorj 
In short - you cannot run an integrity check against buckets created before the integrity check was enabled; see the following community post: https://community.splunk.com/t5/Splunk-Enterprise/enable-integrity-control-on-splunk-6-3/m-p/266889#:~:text=Error%20description%20%22journal%20has%20no,Reason%3DJournal%20has%20no%20hashes.
Credit to @dbhagi_splunk for their answer there:
The Data Integrity Control feature and the corresponding settings/commands only apply to data that is indexed after turning on this feature. It won't go ahead and generate hashes (or even check integrity) for pre-existing data. So in the case where "./splunk check-integrity -index [index_name]" returned the following error, that means the bucket was not created/indexed with the Data Integrity Control feature enabled. Either it was created before you enabled it (assuming you turned on this feature for your index now) or you haven't enabled this feature for index=index_name at all.
Error description "journal has no hashes": this indicates that the journal was not created with hashes enabled.
Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/index_name/db/db_1429532061_1429531988_278, Reason=Journal has no hashes.
The same applies to "./splunk generate-hash-files -index [index_name]". You would be able to generate hashes (meaning, extract the hashes embedded in the journal) only for Data Integrity Control enabled buckets. This won't go and compute/create hashes for normal buckets without this feature enabled. Say you enabled the feature and created a few buckets, but you lost the hash files of a particular bucket (someone modified or deleted them on disk); then you can run this command so that it again extracts the hashes and writes them to hash files (l1hashes_id_guid.dat, l2hash_id_guid.dat). Hope I answered all your questions.
Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
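To tie the pieces together, here is a minimal sketch (the index name is only an example) of enabling the feature and then generating/checking hashes for buckets written after the change:

# indexes.conf on the indexers - applies to buckets created from now on
[linux_os]
enableDataIntegrityControl = true

# after restarting or reloading the index configuration, once new buckets exist:
$SPLUNK_HOME/bin/splunk check-integrity -index linux_os
$SPLUNK_HOME/bin/splunk generate-hash-files -index linux_os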
Hi splunkers, 
My client wants to conduct a consistency check on all the indexes that they collect, so I added enableDataIntegrityControl=1 to every index's settings and created a script to run SPLUNK_CMD check-integrity -index "$INDEX" for all indexes. But that's where the problem comes from: check-integrity fails for the indexes where we keep collecting data in real time (e.g. linux_os logs, window_os logs). The results look like this:

server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
disableSSLShutdown=0
Setting search process to have long life span: enable_search_process_long_lifespan=1
certificateStatusValidationMethod is not set, defaulting to none.
Splunk is starting with EC-SSC disabled
CMIndexId: New indexName=linux_os inserted, mapping to id=1
Operating on: idx=linux_os bucket='/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0'
Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0, Reason=Journal has no hashes.
Operating on: idx=_audit bucket='/opt/splunk/var/lib/splunk/linux_os/db/hot_v1_1'
Total buckets checked=2, succeeded=1, failed=1
Loaded latency_tracker_log_interval with value=30 from stanza=health_reporter
Loaded aggregate_ingestion_latency_health with value=1 from stanza=health_reporter
aggregate_ingestion_latency_health with value=1 from stanza=health_reporter will enable the aggregation of ingestion latency health reporter.
Loaded ingestion_latency_send_interval_max with value=86400 from stanza=health_reporter
Loaded ingestion_latency_send_interval with value=30 from stanza=health_reporter

Is there a way to solve these problems?