All Posts

You can try the `showSplitSeries` option in Dashboard Studio to show each line/series as its own chart. See this run-anywhere sample dashboard:

```
{
  "title": "Test_Dynamic_Charting",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_5sPrf0wX": {
      "dataSources": {
        "primary": "ds_jq4P4CeS"
      },
      "options": {
        "showIndependentYRanges": true,
        "showSplitSeries": true,
        "yAxisMajorTickSize": 4
      },
      "type": "splunk.line"
    }
  },
  "dataSources": {
    "ds_jq4P4CeS": {
      "name": "Search_1",
      "options": {
        "query": "index=_internal \n| eval sourcetype=sourcetype.\"##\".log_level\n| timechart count by sourcetype"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [
      "input_global_trp"
    ],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          {
            "item": "viz_5sPrf0wX",
            "position": {
              "h": 960,
              "w": 1240,
              "x": 0,
              "y": 0
            },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "options": {},
    "tabs": {
      "items": [
        {
          "label": "New tab",
          "layoutId": "layout_1"
        }
      ]
    }
  }
}
```
It's normal for _indextime not to match _time exactly, since there is always some delay between event transmission and indexing. How big is the lag you're seeing?
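If you want to put numbers on it, here is a quick sketch built on the same search from your question (the percentile choice is just illustrative):

```
index=tenable sourcetype="tenable:io:vuln"
| eval lag = _indextime - _time
| stats min(lag) AS min_lag_s, avg(lag) AS avg_lag_s, perc95(lag) AS p95_lag_s, max(lag) AS max_lag_s
```

Seconds to a few minutes of lag is usually just pipeline and collection latency; consistently large values are worth investigating.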
Hi @sainag_splunk, I probably didn't explain it right. The data that flows in is under the following sourcetypes:
- tenable:io:vuln
- tenable:io:assets
- tenable:io:plugin
- tenable:io:audit_logs

The Tenable App for Splunk at https://splunkbase.splunk.com/app/4061 seems to present only the tenable:io:vuln sourcetype. Are there any other presentations, by any chance, for the assets, plugin, and audit_logs data?
We're using the Tenable Add-on for Splunk (TA-tenable) to ingest data from Tenable.io. The app's props.conf has the following:

```
[tenable:io:vuln]
DATETIME_CONFIG = NONE
```

When we run the following SPL:

```
index=tenable sourcetype="tenable:io:vuln" | eval lag = _indextime - _time
```

we see non-zero lag values, even though I would expect the lag to be zero if _time truly equals _indextime. If anything, I would expect DATETIME_CONFIG = CURRENT. What am I missing?
I noticed that I don't have the Splunk Python SDK, because my Python script has no `import splunklib` or `import splunk`. I am using the Python script within the Alteryx tool.
Hi, thanks for your input. Where do I add this in my search query? This is how my URL looks:

https://server/services/search/jobs/export?search=search%20index%3Dcfs_apiconnect_102212%20%20%20%0Asourcetype%3D%22cfs_apigee_102212_st%22%20%20%0Aearliest%3D-1d%40d%20latest%3D%40d%20%0Aorganization%20IN%20(%22ccb-na%22%2C%22ccb-na-ext%22)%20%0AclientId%3D%22AMZ%22%20%0Astatus_code%3D200%0Aenvironment%3D%22XYZ-uat03%22%0A%7C%20table%20%20_time%2CclientId%2Corganization%2Cenvironment%2CproxyBasePath%2Capi_name&&output_mode=csv
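For reference, URL-decoding the search parameter of that export URL gives the following SPL (note the doubled && before output_mode=csv, which looks like a typo and is probably better as a single &):

```
search index=cfs_apiconnect_102212
sourcetype="cfs_apigee_102212_st"
earliest=-1d@d latest=@d
organization IN ("ccb-na","ccb-na-ext")
clientId="AMZ"
status_code=200
environment="XYZ-uat03"
| table _time,clientId,organization,environment,proxyBasePath,api_name
```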
Thanks for pinging me, @joemcmahon. I haven't had many reasons to touch this app for quite a while, but I'll try to assess how much work the upgrade would be to keep the app current. No promises I can get to it this week, but I'll try soon. Splunk is not really part of my regular work or hobby life anymore, so it's not something I put much focus on these days.
Hi, any guidance on this?
Hi @joemcmahon  The Upgrade Readiness App warning for the Modal Text Message App version 2.0.0 regarding jQuery 3.5 compatibility is valid. Splunk Enterprise 9.0 and later require apps to use jQuery 3.5.x or later because of security vulnerabilities in older versions. Version 2.0.0 of the Modal Text Message App uses an older version of jQuery; you may be able to contact the developer @rjthibod, who may be able to resolve this with an updated version. Splunk announced with version 8.2.0 that apps need to move to jQuery 3.5, as older versions will be removed from future versions of Splunk. That was a number of releases ago, so the old library may well be removed soon if it hasn't been already, which would cause the app to stop working. Check out https://docs.splunk.com/Documentation/UpgradejQuery/1/UpgradejQuery/jQueryOverview for more info.
The Upgrade Readiness App is flagging version 2.0.0 of the Modal Text Message App for Splunk as not jQuery 3.5 compatible. Is this a non-issue, or does the app need to be slightly altered? It's installed in a Splunk Enterprise 9.4.1 environment.
Hello @livehybrid  I'm on Splunk Enterprise 9.2.1 and use this application:   Example XML:   Regards
Help me with a Splunk query to monitor the CPU and memory utilized by Splunk ad-hoc and alert searches.
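Not an official answer, but a common starting point is the _introspection index, which records per-process resource usage for search processes. A minimal sketch, assuming the standard splunk_resource_usage fields (verify the exact data.search_props.type values on your version; ad-hoc searches usually report adhoc and alert/scheduled searches report scheduled):

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type=search
| stats avg(data.pct_cpu) AS avg_cpu_pct, max(data.mem_used) AS peak_mem_mb BY data.search_props.type, data.search_props.user
```

From there you can add data.search_props.sid to the BY clause to break it down per search, or wrap it in an alert with a threshold on avg_cpu_pct or peak_mem_mb.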
Hi @grca  Using the Splunk App for Databricks itself does not typically incur additional Splunk licensing costs beyond your existing Splunk Enterprise or Splunk Cloud license. However, any data ingested into Splunk from Databricks using this app will consume your Splunk license volume, just like any other data source. You should also ensure your Databricks license permits the connection and data access by external applications like Splunk. More info on the licensing specific to the Databricks app can be found at https://www.databricks.com/legal/db-license
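If you want to see how much license volume the Databricks data actually consumes, a minimal sketch against the license usage log (the index name databricks is a placeholder for wherever your Databricks data lands; run this where license_usage.log is indexed, typically the license manager):

```
index=_internal source=*license_usage.log type=Usage idx=databricks
| timechart span=1d sum(eval(b/1024/1024)) AS ingested_MB
```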
Are there any licensing concerns to be considered for the integration between Splunk and Databricks using the plugin here: https://splunkbase.splunk.com/app/5416
No redistribution or sale - just plain connecting one environment to the other.
Hi @esllorj  In short - you cannot run an integrity check against buckets created before the integrity check was enabled; see the following community post: https://community.splunk.com/t5/Splunk-Enterprise/enable-integrity-control-on-splunk-6-3/m-p/266889#:~:text=Error%20description%20%22journal%20has%20no,Reason%3DJournal%20has%20no%20hashes.

Credit to @dbhagi_splunk for their answer there:

The Data Integrity Control feature and its corresponding settings/commands only apply to data indexed after the feature is turned on. It won't generate hashes (or even check integrity) for pre-existing data. So where "./splunk check-integrity -index [index_name]" returned the following error, that bucket was not created/indexed with Data Integrity Control enabled - either it was created before you enabled the feature (assuming you turned it on for your index just now), or you haven't enabled the feature for index=index_name at all. The error description "journal has no hashes" indicates that the journal was not created with hashes enabled:

Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/index_name/db/db_1429532061_1429531988_278, Reason=Journal has no hashes.

The same applies to "./splunk generate-hash-files -index [index_name]": you can generate hashes (meaning, extract the hashes embedded in the journal) only for buckets with Data Integrity Control enabled; this won't compute/create hashes for normal buckets without the feature. Say you enabled the feature and created a few buckets, but lost the hash files of a particular bucket (someone modified or deleted them on disk) - then you can run this command so that it extracts the hashes again and writes them to the hash files (l1hashes_id_guid.dat, l2hash_id_guid.dat). Hope I answered all your questions.
Hi splunkers, my client wants to conduct a consistency check on all the indexes that they collect, so I added enableDataIntegrityControl=1 to every index's settings and created a script that runs SPLUNK_CMD check-integrity -index "$INDEX" for all indexes. But that's where the problem comes from: for data we keep collecting in real time (e.g. linux_os logs, windows_os logs), running check-integrity fails. The results look like this:

```
server.conf/[sslConfig]/sslVerifyServerCert is false disabling certificate validation; must be set to "true" for increased security
disableSSLShutdown=0
Setting search process to have long life span: enable_search_process_long_lifespan=1
certificateStatusValidationMethod is not set, defaulting to none.
Splunk is starting with EC-SSC disabled
CMIndexId: New indexName=linux_os inserted, mapping to id=1
Operating on: idx=linux_os bucket='/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0'
Integrity check error for bucket with path=/opt/splunk/var/lib/splunk/linux_os/db/db_1737699472_1737699262_0, Reason=Journal has no hashes.
Operating on: idx=_audit bucket='/opt/splunk/var/lib/splunk/linux_os/db/hot_v1_1'
Total buckets checked=2, succeeded=1, failed=1
Loaded latency_tracker_log_interval with value=30 from stanza=health_reporter
Loaded aggregate_ingestion_latency_health with value=1 from stanza=health_reporter
aggregate_ingestion_latency_health with value=1 from stanza=health_reporter will enable the aggregation of ingestion latency health reporter.
Loaded ingestion_latency_send_interval_max with value=86400 from stanza=health_reporter
Loaded ingestion_latency_send_interval with value=30 from stanza=health_reporter
```

Is there a way to solve these problems?
@kamal18sharma  Was it a compatibility issue that made you re-install Splunk Enterprise? I am facing this issue with the "Splunk App for SOAR Export", which I installed on the SOAR app. Can you elaborate on the solution?
Hi @Amira  Have you updated the cisco_sdwan_index macro to index=<yourIndexName> for the index containing the syslog data? Could you please also confirm the sourcetypes you have in your Cisco SD-WAN index?
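To enumerate those sourcetypes, something like this should do it (your_sdwan_index is a placeholder for the index you pointed the macro at):

```
index=your_sdwan_index earliest=-24h
| stats count BY sourcetype
```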
/opt/caspida/bin/Caspida setuphadoop

```
Failed to run sudo -u hdfs hdfs namenode -format >> /var/vcap/sys/log/caspida/caspida.out 2>&1
Fri Jun 2 17:06:11 +07 2023: [ERROR] Failed to run hadoop_setup [2]. Fix errors and re-run again.
Error in running /opt/caspida/bin/Caspida setuphadoop on 192.168.126.16. Fix errors and re-run again.
```

I executed the command /opt/caspida/bin/Caspida setup, but it stopped here: the Hadoop setup can't run, and I don't know the cause yet. Can someone please help me? I put some install logs here.
Hi @ASEP  The field value from a "Run Query" action in a Splunk SOAR playbook needs to be accessed from the list of results returned by the action. Simply adding the field name under "Fields to add to output" makes the field available, but you still need to reference the correct result object. The Run Query action typically returns a list of results in the results.data attribute of the action's output; you need to access the specific result you are interested in (e.g., the first one) and then the field within that result. Assuming your "Run Query" action is named your_action_name, you can access the additional_action field from the first result using templating like this:

```
{{ your_action_name.results.data[0].additional_action }}
```

You can then use this value in subsequent playbook logic, such as a decision block that checks whether it contains "teardown". Each object in the action_name.results.data list corresponds to a row returned by your Splunk query; you access elements in the list using square brackets [index] and fields within an object using dot notation .field_name. Check that the results.data list is not empty before accessing elements by index (like [0]) to prevent errors or None values when the query returns no results - you should be able to use a {% if your_action_name.results.data %} block for this.