All Topics

Hi, we're using Splunk Enterprise Security v5.1.0. When I search the data model list, I can't find the "Endpoint" data model, yet there are a lot of correlation searches that use it. I know that the "Application State" data model is deprecated and the "Endpoint" data model is used instead. Could you please correct my understanding?
Hello team, I am working on Enterprise Security version 5.1.0 (Splunk version 7.2.3). Although there are some correlation searches that use the Endpoint data model, I cannot find any Endpoint data model in the data model list! For example, I have the Change Analysis and Application State data models, but there is no Endpoint data model. I was wondering where the Endpoint dataset/data model is, or how I can add it? Thanks
One of our users does not receive any alerts from alerts he creates himself. However, if we create the same alert ourselves, we receive it, and if we add the user in CC he receives it as well. As part of the troubleshooting we confirmed that his role has all the capabilities needed to schedule a search, yet we are still encountering the issue. If someone has inputs, that would be a great help. @murbanek_splunk
Hi chaps, I need some help understanding why an alert is not getting triggered. The alert's query, when executed over a 7-day period, gives a nonzero count of 6, i.e. greater than 5 (our condition is to trigger the alert when the nonzero count exceeds 5). The alert still does not fire even though the nonzero count is 6. When we checked the scheduler log, the alert_actions field is blank. I have pasted screenshots for reference. Below is the query:

sourcetype="*" LOG_MESSAGE="Retry*" "Collections.NCS" NOT LOG_MESSAGE="Retry #1 *" | timechart span=10m count | autoregress count p=1-5 | eval nonzero=if(count > 0, if(count_p1 > 0, if(count_p2 > 0, if(count_p3 > 0, if(count_p4 > 0, if(count_p5 > 0, 6, 5), 4), 3), 2), 1), 0) | fields _time, nonzero

The screenshot shows nonzero counts exceeding 5 when the search is run over a 7-day period. Below is the scheduler log; note that alert_actions is blank:

10-31-2020 08:10:07.566 +0000 INFO SavedSplunker - savedsearch_id="XXX;search; alert", search_type="", user="XXX", app="search", savedsearch_name="XXXX alert", priority=default, status=success, digest_mode=1, scheduled_time=1604131800, window_time=0, dispatch_time=1604131805, run_time=1.785, result_count=1015, alert_actions="", sid="scheduler__smadan__search__RMD5ab6a869ca92dbacc_at_1604131800_63960_638683B3-25D9-4D2A-AF2E-4E43362FDBFA", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool=""

Please find the alert condition in the attached screenshot.
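For what it's worth, the nested if/autoregress logic above can usually be expressed more compactly with streamstats, which can make the trigger condition easier to verify. This is an untested sketch of the same idea (counting nonzero 10-minute buckets over a sliding window of 6):

```
sourcetype="*" LOG_MESSAGE="Retry*" "Collections.NCS" NOT LOG_MESSAGE="Retry #1 *"
| timechart span=10m count
| streamstats window=6 sum(eval(if(count > 0, 1, 0))) as nonzero
| fields _time, nonzero
```

With either form, the alert's trigger condition still needs to evaluate against the final result rows, so it is worth checking whether the custom trigger condition actually matches what these rows contain.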
Hello, I format a time field like this, but I am unable to sort the time in descending order. How can I do this, please?

| eval time = strftime(_time, "%m/%d/%Y %H:%M") | rename time as "Event time" | table "Event time" | sort "Event time"
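One caveat for anyone with a similar issue: a string in "%m/%d/%Y %H:%M" format does not sort chronologically (it sorts month-first as text). A common approach, sketched here, is to sort on the underlying _time before formatting it:

```
| sort 0 - _time
| eval time = strftime(_time, "%m/%d/%Y %H:%M")
| rename time as "Event time"
| table "Event time"
```

The `0` keeps sort from truncating at 10,000 results, and `- _time` gives descending order; table preserves the row order it receives.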
Folks, I would like to know what index name you recommend for O365 audit logs ingested via the Microsoft O365 add-on for Splunk. I know it doesn't matter technically, but I want to go with whatever most people are using.
Hello all, my organization is using Splunk Cloud. I log into Splunk Cloud to run searches and also access the Enterprise Security app from there. Given the above, are the statements below correct? 1. Splunk Cloud is performing both roles: indexer and search head. 2. Splunk Cloud is the same thing as Splunk Enterprise.
Two questions regarding Dynamic Data Storage: 1) Within an index, can I archive a specific sourcetype only, or can I only archive the entire index? 2) Let's say index main has a searchable time of 365 days. I select Dynamic Data Storage > Splunk Archive and specify an Archive Retention Period of 365 days. Does that mean that when original event data reaches 365 days old, it will move to Splunk storage as frozen and be available for another 365 days? Why is there no option to define a max size?
Hello there. Within Splunk Cloud, I go to Settings > Indexes and look at my main index. It has a current size of 5 TB and a searchable retention time of a year. Questions: 1) How much data can remain on this index before new data starts to overwrite old data? 2) How do I view the configuration of this index, such as how long data waits before it starts going through the bucket aging stages? 3) If searchable retention is a year, does that mean data older than a year will go to frozen status?
I would like a sanity check on whether my plan is sound for my indexer cluster migration. Currently I'm changing the following:

- Migrating the indexer cluster from old hardware to new hardware
- Implementing a new indexes.conf to take advantage of volumes and to address changes in partitions

Some misc notes:

- The indexers are on Linux and are moving to servers that also run Linux
- The Splunk version is staying the same

I have the following challenges to address during the move:

- We presently have everything logging to /opt/splunk/var/lib/splunk...; this will be changed to take advantage of our new SSDs (/fastdisk), with cold/non-priority data going to HDD (/slowdisk). Nothing will go to the old location on the new systems.
- The /opt partition on the new systems is smaller than the old one, so a straight copy-paste won't work.
- The new indexes.conf adds the challenge of making sure everything in this migration is correct.

Plan:

1. Rewrite indexes.conf to include volumes and the other changes needed for the physical server move
2. Rsync data from idx1 to idx1-new while Splunk is running (hot+warm+cold)
3. Install Splunk (7.3.3) on idx1-new and copy the config over
4. Verify all required ports are open on the new system
5. Mount the NAS on the new indexer (frozen data only)
6. Update SPLUNK_DB in etc/splunk-launch.conf
7. Re-enter bindDNpassword (or you will lock out your AD account via .../authentication.conf)
8. Put the cluster master in maintenance mode
9. Turn off idx1, remove idx1 from the cluster, do a final rsync to idx1-new
10. Change hostnames from idx1-new to idx1 (DNS)
11. Place indexes.conf in .../etc/system/local (temporary)
12. Start Splunk
13. Add idx1-new to the cluster master, restart Splunk on idx1-new
14. Repeat for the second indexer
15. Place the new indexes.conf on the cluster master, push it out to the indexers, and remove the etc/system/local copy of indexes.conf

How does that sound?
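As an illustration of step 1, a volume-based indexes.conf along the lines described might look like the sketch below. Volume names, paths, and sizes are hypothetical placeholders, not a drop-in config:

```
# indexes.conf sketch: hot/warm on SSD, cold on HDD
[volume:fast]
path = /fastdisk/splunk
maxVolumeDataSizeMB = 4000000

[volume:slow]
path = /slowdisk/splunk
maxVolumeDataSizeMB = 8000000

[main]
homePath = volume:fast/defaultdb/db
coldPath = volume:slow/defaultdb/colddb
thawedPath = $SPLUNK_DB/defaultdb/thaweddb
```

Note that thawedPath cannot reference a volume, which is why it keeps the $SPLUNK_DB form.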
Hello all. I'm testing a SmartStore index with the configuration below. I'm getting errors from S3Client ("no address associated with hostname") and CacheManager ("unable to check if receipt exists at path=xxxxx" and "network error"). In the Monitoring Console, SmartStore just says OFFLINE. Does anyone have an idea which part of my configuration is the issue? Note: a test upload of data to the bucket using the provided secret and access keys succeeded.

[volume:s3]
storageType = remote
path = s3://<bucket_name>/
remote.s3.endpoint = https://s3-accesspoint.us-east-1.amazon.aws/
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>

[smartstore_index]
remotePath = volume:s3/smartstore_index
maxGlobalDataSizeMB = number
frozenTimePeriodInSecs = number
reFactor = auto
homePath = volume:primary/smartstore_index/db
coldPath = volume:secondary/smartstore_index/colddb
homePath = $SPLUNK_DB/smartstore_index/thaweddb
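For readers hitting similar errors: "no address associated with hostname" typically means the endpoint hostname does not resolve, and the endpoint above ends in amazon.aws rather than amazonaws.com. A few other things stand out: reFactor is presumably meant to be repFactor, the second homePath line looks like it was meant to be thawedPath, and homePath/coldPath reference volumes (primary, secondary) that are not defined anywhere. A hedged sketch of a stanza shape that typically works, with placeholder sizes:

```
# indexes.conf - illustrative sketch, not a verified fix for the config above
[volume:s3]
storageType = remote
path = s3://<bucket_name>/
# standard regional endpoint; S3 access points use a different URL form
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com
remote.s3.access_key = <access_key>
remote.s3.secret_key = <secret_key>

[smartstore_index]
remotePath = volume:s3/smartstore_index
maxGlobalDataSizeMB = 500000
frozenTimePeriodInSecs = 31536000
repFactor = auto
homePath = $SPLUNK_DB/smartstore_index/db
coldPath = $SPLUNK_DB/smartstore_index/colddb
thawedPath = $SPLUNK_DB/smartstore_index/thaweddb
```

With SmartStore, homePath and coldPath serve as local cache locations, so they do not need to point at the remote volume.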
Hi, can someone help me filter out a nested JSON value in a table? I have a search with an spath command, and I can't figure out how to exclude stages{}.status=SUCCESS so that only failures are shown in the table. Adding stages{}.status!=SUCCESS doesn't work because the nested JSON contains stages with both SUCCESS and FAILURE statuses. Here is the search I'm using:

index="jenkins_statistics" event_tag=job_event type="completed" stages{}.status=FAILURE | spath stages{} output=Stages | table job_name job_result stages{}.name stages{}.status stages{}.error stages{}.error_type

Thank you. Here is how the table looks: Here is the raw event: {"job_type":"Pipeline","metadata":{"BITBUCKET_PR_ID":"","BRANCH_TO_BUILD":"","BRANCH_SOURCE":"","scm":"git"},"upstream":"","job_duration":261.361,"label":"nojobs","type":"completed","queue_time":9.549,"event_tag":"job_event","node":"(master)","job_name":"SomeSite/job/SomeJob_Build","test_summary":{"duration":0.0,"skips":0,"total":0,"failures":0,"passes":0},"stages":[{"duration":59.661,"pause_duration":0.0,"start_time":1603984902,"children":[{"duration":0.002,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984902,"name":"Print Message","id":"13","status":"SUCCESS"},{"duration":0.004,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984902,"name":"Print Message","id":"14","status":"SUCCESS"},{"duration":0.019,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984902,"name":"Notify a build status to BitBucket.","id":"15","status":"SUCCESS"},{"duration":0.006,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984902,"name":"Print Message","id":"16","status":"SUCCESS"},{"duration":55.35,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984902,"name":"Check out from version control","id":"17","status":"SUCCESS"},{"duration":0.015,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984957,"name":"Print 
Message","id":"18","status":"SUCCESS"},{"duration":0.484,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984957,"name":"Shell Script","id":"19","status":"SUCCESS"},{"duration":0.004,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984958,"name":"Print Message","id":"20","status":"SUCCESS"},{"duration":0.652,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984958,"name":"Shell Script","id":"21","status":"SUCCESS"},{"duration":0.014,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984958,"name":"Print Message","id":"22","status":"SUCCESS"},{"duration":0.607,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984958,"name":"Shell Script","id":"23","status":"SUCCESS"},{"duration":0.014,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984959,"name":"Print Message","id":"24","status":"SUCCESS"},{"duration":0.013,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984959,"name":"Print Message","id":"25","status":"SUCCESS"},{"duration":0.553,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984959,"name":"Shell Script","arguments":"touch code_version.properties","id":"26","status":"SUCCESS"},{"duration":0.564,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984959,"name":"Shell Script","arguments":"\n cat <<EOF > code_version.properties \n GIT_REVISION=$GIT_REVISION\n BITBUCKET_PULL_REQUEST_ID=$BITBUCKET_PULL_REQUEST_ID\n CODE_VERSION=$CODE_VERSION\n GIT_BRANCH_LOCAL=$GIT_BRANCH_LOCAL\n GIT_BRANCH=$GIT_BRANCH\n BB_REPO=$BB_REPO\n BB_CREDS=$BB_CREDS\n OUTLOOK_WEBHOOK=$OUTLOOK_URL\n EOF","id":"27","status":"SUCCESS"},{"duration":0.62,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984960,"name":"Shell Script","arguments":"cat code_version.properties","id":"28","status":"SUCCESS"},{"duration":0.003,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984961,"name":"Print 
Message","id":"29","status":"SUCCESS"},{"duration":0.032,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984961,"name":"General Build Step","arguments":"MASKED_VALUE","id":"30","status":"SUCCESS"},{"duration":0.443,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984961,"name":"Notify a build status to BitBucket.","id":"31","status":"SUCCESS"}],"name":"CheckoutLogic","id":"10","status":"SUCCESS"},{"duration":112.969,"pause_duration":0.0,"start_time":1603984961,"children":[{"duration":112.809,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603984961,"name":"Shell Script","arguments":"npm i","id":"38","status":"SUCCESS"}],"name":"Install Dependancies","id":"37","status":"SUCCESS"},{"duration":0.021,"pause_duration":0.0,"start_time":1603985074,"name":"Test","id":"42","status":"SUCCESS"},{"duration":0.016,"pause_duration":0.0,"start_time":1603985074,"name":"Branch: Lint, Unit Test","id":"45","status":"SUCCESS"},{"duration":47.47,"pause_duration":0.0,"start_time":1603985074,"children":[{"duration":0.003,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985074,"name":"Print Message","id":"59","status":"SUCCESS"},{"duration":0.004,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985074,"name":"Print Message","id":"60","status":"SUCCESS"},{"duration":25.865,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985074,"name":"Shell Script","arguments":"npm run lint:junit","id":"61","status":"SUCCESS"},{"duration":19.642,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985100,"error_type":"java.io.IOException","name":"General Build Step","id":"67","error":"MicrosoftAzureStorage - Error occurred while uploading to Azure - SomeDevSite","status":"FAILURE"},{"duration":0.879,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985120,"name":"Shell 
Script","id":"70","status":"SUCCESS"},{"duration":0.541,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985121,"name":"Notify a build status to BitBucket.","id":"73","status":"SUCCESS"}],"error_type":"java.io.IOException","name":"Lint, Unit Test","id":"48","error":"MicrosoftAzureStorage - Error occurred while uploading to Azure - SomeDevSite","status":"FAILURE"},{"duration":0.019,"pause_duration":0.0,"start_time":1603985074,"name":"Branch: Sonar Analysis","id":"46","status":"SUCCESS"},{"duration":0.054,"pause_duration":0.0,"start_time":1603985074,"name":"Sonar Analysis","id":"50","status":"SUCCESS"},{"duration":23.417,"pause_duration":0.0,"start_time":1603985074,"children":[{"duration":22.203,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985074,"name":"Shell Script","arguments":"curl -k https://SomeJenNode.com:1234/job/Dev_Site/job/scan/lastBuild/consoleFull | sed \"s#<span class=\"timestamp\"><b>##g;s#</b> </span># #g\"","id":"58","status":"SUCCESS"}],"name":"Building Dev_Site » Sonar_scan","id":"57","status":"SUCCESS"},{"duration":0.137,"pause_duration":0.0,"start_time":1603985122,"error_type":"java.io.IOException","name":"Build Code","id":"85","error":"MicrosoftAzureStorage - Error occurred while uploading to Azure - SomeDevSite","status":"FAILURE"},{"duration":0.15,"pause_duration":0.0,"start_time":1603985122,"error_type":"java.io.IOException","name":"Package","id":"89","error":"MicrosoftAzureStorage - Error occurred while uploading to Azure - SomeDevSite","status":"FAILURE"},{"duration":0.156,"pause_duration":0.0,"start_time":1603985122,"error_type":"java.io.IOException","name":"Zip Test Data","id":"93","error":"MicrosoftAzureStorage - Error occurred while uploading to Azure - SomeDevSite","status":"FAILURE"},{"duration":0.218,"pause_duration":0.0,"start_time":1603985122,"error_type":"java.io.IOException","name":"Prepare Artifacts","id":"97","error":"MicrosoftAzureStorage - Error occurred while uploading to Azure - 
SomeDevSite","status":"FAILURE"},{"duration":38.86,"pause_duration":0.0,"start_time":1603985123,"children":[{"duration":0.339,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985123,"name":"Notify a build status to BitBucket.","id":"102","status":"SUCCESS"},{"duration":38.473,"pause_duration":0.0,"exec_node":"SomeJenNode","start_time":1603985123,"name":"Delete workspace when build is done","id":"103","status":"SUCCESS"}],"name":"Declarative: Post Actions","id":"101","status":"SUCCESS"}],"build_number":1234,"job_result":"FAILURE","trigger_by":"Started by user John Doe: Bitbucket PPR: pull request updated","scm":"git","user":"(scm)","build_url":"job/SomeSite/job/SomeJob_Build/1234/","queue_id":258942,"job_started_at":"2020-10-29T15:21:40Z"}
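For readers with the same problem, one common pattern (sketched here, not tested against this exact data) is to expand the stages{} multivalue into one result row per stage before filtering, so a stage-level status test becomes possible:

```
index="jenkins_statistics" event_tag=job_event type="completed" stages{}.status=FAILURE
| spath stages{} output=stage
| mvexpand stage
| spath input=stage
| search status=FAILURE
| table job_name job_result name status error error_type
```

After mvexpand, each row carries a single stage's JSON, and `spath input=stage` extracts name/status/error from just that stage, so the filter no longer sees a mix of SUCCESS and FAILURE values in one field.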
I am looking to track how often users access specific areas of an application. What I am finding is that the lower prediction limit goes negative, which I do not want. I reset it to 0 when it is below 0, but when doing so I lose the traditional predict visualization; I no longer get the shaded area with the prediction line, and instead get four individual lines on the chart. Is there any way to force the normal predict visual to be shown?

|timechart span=1h count as Vol |predict Vol as PredictedVol algorithm=LLP5 upper90=high lower95=low holdback=4 future_timespan=8 |rename low(PredictedVol) as low, high(PredictedVol) as high |eval low=if(low<0,0,low)
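The shaded-band rendering appears to depend on the field-naming convention that predict emits (prediction(X), low(X), high(X)), which the rename above discards. A hedged sketch that clamps the lower bound while keeping the original field names, so the chart can still recognize them:

```
| timechart span=1h count as Vol
| predict Vol as PredictedVol algorithm=LLP5 upper90=high lower95=low holdback=4 future_timespan=8
| eval "low(PredictedVol)" = if('low(PredictedVol)' < 0, 0, 'low(PredictedVol)')
```

Note the single quotes on the right-hand side: in eval, field names containing parentheses must be quoted with single quotes when read and double quotes when assigned.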
Hi all, does anyone know the exact order in which index-time parsing is completed? The reason I ask: I have one log file from which I'd like to parse two different timestamps. I was going to assign sourcetype A to it, then at parse time use transforms to assign either sourcetype "A:A" or "A:B" and pull the time from there. However, it appears timestamps are extracted before this step. Thoughts?
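To make the scenario concrete, the setup described would look something like the sketch below (stanza names and regexes are hypothetical). As the poster observed, timestamp extraction happens in the aggregation stage of the parsing pipeline, before these TRANSFORMS run in the typing pipeline, so TIME_* settings attached to the rewritten sourcetypes do not take effect:

```
# props.conf
[A]
TRANSFORMS-route = set_st_aa, set_st_ab

# transforms.conf
[set_st_aa]
REGEX = pattern_for_variant_a
FORMAT = sourcetype::A:A
DEST_KEY = MetaData:Sourcetype

[set_st_ab]
REGEX = pattern_for_variant_b
FORMAT = sourcetype::A:B
DEST_KEY = MetaData:Sourcetype
```

One workaround is to handle both timestamp layouts in sourcetype A itself, for example with a TIME_PREFIX/TIME_FORMAT permissive enough to match both, or a custom datetime.xml.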
Hi all, I'm trying to populate a dropdown with dynamic values according to the value coming from another dropdown, like units of measure. For example: if dropdown A contains Fruits, then B will contain (1000 gr, 100 hg, 1 kg); if A contains Liquid, B will contain (1000 ml, 100 cl), where 1000/100/1 will be the Field For Value and gr/hg/kg will be the Field For Label. I'm trying something like this:

| makeresults | eval type="fruits" | eval unit=case(type=="fruits","1000 gr, 100 hg, 1 kg", type=="liquid","1000 ml, 100 cl", 1=1, "no value") | rex field=unit "(?<A>.*)\,(?<B>.*)" | fields A B

But I'm not able to parameterize the two or three values in the field to make the split. Thanks
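One way to handle a variable number of entries (a sketch building on the makeresults test above; the value/label field names are arbitrary) is to let split() and mvexpand do the splitting, then rex each entry into its value and label:

```
| makeresults
| eval type="fruits"
| eval unit=case(type=="fruits","1000 gr,100 hg,1 kg", type=="liquid","1000 ml,100 cl", 1=1, "no value")
| eval unit=split(unit, ",")
| mvexpand unit
| rex field=unit "^\s*(?<value>\S+)\s+(?<label>\S+)"
| table value label
```

This produces one row per unit regardless of whether the list has two or three entries, which is the shape a dropdown's dynamic search expects for Field For Value and Field For Label.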
I have installed the Splunk Cloud Gateway app and registered my device and apps for mobile devices. But when I use most of the dashboards on my mobile device, I see the following exception on the Cloud Gateway app dashboard:

2020-10-30 13:15:16,214 ERROR [dashboard_request_processor] [dashboard_request_processor] [fetch_search_job_results_visualization_data] [35913] Failed to get search job results search_id=4d646b7793ce9e18cb8a6b82b9c05f02a03ca7ec06d9529006333abb947e32d8, response.code=400, response.text={"messages":[{"type":"FATAL","text":"Error in 'rest' command: Invalid argument: '$datatype$'"}]}

I'm running Splunk Enterprise 7.3.3 on Linux and Splunk Cloud Gateway v1.13. The connectivity tests with curl to the websocket and gateway servers work just fine, and I have green status on the gateway dashboard and message statistics. Any suggestions are appreciated.
We had someone bring down a search head cluster member the other day; the user had inadvertently typed "ndex=myindex" instead of "index=myindex", i.e. a typo that led Splunk to search across all of the user's default indexes. I'd like to create a Workload Management rule that de-prioritizes such searches (whether the omission of index is intentional or not). Obviously, matching on "index=*" doesn't work for this scenario, and 'NOT index' would match instances of the word index far beyond the intended one at the beginning of the search. (Also recognizing that index specification is not always "index="; it can also be "index IN(".) Further, many legitimate searches don't include 'index' at all, e.g. tstats, inputlookup, pivot, rest, makeresults, etc. Additionally, folks are using macros which hide index designations. Upshot: how can I create a Workload Management rule that would prevent the use of ndex (instead of index) from bringing down an SHC member?
Hi Splunk experts, I need some help: my Splunk search is giving me duplicate entries. I have made sure that there are no duplicate events, and I have also used dedup in my search, yet it still returns duplicates. See the attached image. Could you please let me know what the issue could be? Thanks and best regards, Krishna
I have huge proxy logs for which I need to create a monthly report going back to Jan 2019. Is there a way I can create a search that runs quickly and produces the table? My current search

index=xyz | timechart span=1mon count by action

runs very slowly over this much data. I have also tried the data model route, and yet the table does not get created. Is there a way to build the report for each month and consolidate the results in an optimized way?
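One approach that often helps at this scale, assuming an accelerated data model with an action field (e.g. the CIM Web data model, which may not match this environment), is tstats over the accelerated summaries instead of a raw timechart. A hedged sketch:

```
| tstats summariesonly=true count from datamodel=Web where earliest=01/01/2019:00:00:00 by _time span=1mon Web.action
| xyseries _time Web.action count
```

Since tstats reads pre-summarized data rather than raw events, it tends to be dramatically faster; the trade-off is that the acceleration must already cover the full time range, or the per-month results need to be backfilled into a summary index and consolidated from there.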
I am trying to send an email with the help of the makeresults command in a Splunk search, but I am not receiving the email and am getting the error below.

Error:
2020-10-30 12:45:21,129 -0400 ERROR sendemail:1428 - [HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8289/servicesNS/admin/cams/search/jobs/subsearch_asfd29470124adsfa319841023e?output_mode=json
Traceback (most recent call last):
  File "/app/splunk/etc/apps/search/bin/sendemail.py", line 1421, in <module>
    results = sendEmail(results, settings, keywords, argvals)
  File "/app/splunk/etc/apps/search/bin/sendemail.py", line 400, in sendEmail
    jobResponseHeaders, jobResponseBody = simpleRequest(uriToJob, method='GET', getargs={'output_mode':'json'}, sessionKey=sessionKey)
  File "/app/splunk/lib/python2.7/site-packages/splunk/rest/__init__.py", line 559, in simpleRequest
    raise splunk.AuthorizationFailed(extendedMessages=uri)
AuthorizationFailed: [HTTP 403] Client is not authorized to perform requested action; https://127.0.0.1:8289/servicesNS/admin/cams/search/jobs/subsearch_asdfasljfd9147192034ejdlajff?output_mode=json

Query:
<My query> | map search="| makeresults | eval attribute=\"$value$\" | table attribute | sendemail to=\"myemail@id.com\" content_type=\"html\" message=\"Test message\""

Any help would be appreciated. Thanks in advance.