All Topics

Hi all, I am having trouble hiding some labels on a bar chart in Dashboard Studio. I would like to hide the "app" axis title, the "ABAP" category label, and the 0-100 x-axis labels. I've been reading https://docs.splunk.com/Documentation/DashApp/0.9.0/DashApp/Comparison (bar chart options) and implemented everything that looked relevant below, without any success.

Thanks, Dan

Search:

    | makeresults count=33
    | eval app="ABAP"
    | eval type="Green"
    | append [| makeresults count=24 | eval app="ABAP" | eval type="Red" ]
    | chart count by app, type
    | table app Red Green

Code:

    "viz_A8xGX4KY": {
        "type": "splunk.bar",
        "dataSources": {
            "primary": "ds_rxzvxyTs_ds_D0zk0prm"
        },
        "title": "",
        "options": {
            "stackMode": "stacked100",
            "seriesColorsByField": {
                "Green": "#52D017",
                "Red": "#E41B17"
            },
            "legendDisplay": "off",
            "backgroundColor": "#152238",
            "xAxisTitleText": "",
            "axisTitleY.text": "",
            "axisTitleY2.text": "",
            "axisTitleX.visibility": "collapsed",
            "axisTitleY.visibility": "collapsed",
            "axisTitleY2.visibility": "collapsed",
            "axisY.scale": "",
            "axisY2.scale": "",
            "chart.showDataLabels": "none",
            "chart.overlayFields": "",
            "gridLinesY.showMajorLines": false,
            "gridLinesX.showMajorLines": false,
            "axisY.minimumNumber": "none"
        }
    }
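A note on this, offered as a sketch rather than a verified fix: the axisTitle* options only control the axis titles (the "app" text), while the tick labels (the "ABAP" category and the 0-100 numbers) have their own visibility options. In some Dashboard Studio releases these are axisLabelsX.majorLabelVisibility and axisLabelsY.majorLabelVisibility; I have not verified the names against DashApp 0.9.0, so check the bar chart options page for your version. Merged alongside your existing options:

    "axisLabelsX.majorLabelVisibility": "hide",
    "axisLabelsY.majorLabelVisibility": "hide"

On a horizontal bar chart the 0-100 scale is the x axis and the "ABAP" category sits on the y axis, so hiding both sets of major labels should remove them.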
Hi All, hope you are doing well! We want to integrate the Trend Micro Vision One solution with our Splunk. Before doing it, I just want to verify whether my understanding is correct:

1. We need to install the Vision One application from Splunkbase.
2. After installing the app, we open it and click on Configuration.
3. Then we enter the URL and authentication token.
4. We choose the log file type.

Then we will start receiving the data? Kindly let me know if my understanding is correct or not.

If it is, I want to know one more thing: how do we create a use case? We are using some third-party software to onboard the data, so how do we write queries against it? Sorry if I sound like an amateur, but this is my first time.

Thanks, Debjit
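Once the add-on is ingesting, a first sanity-check search confirms what is actually arriving and gives a starting point for use cases. A minimal sketch; the index and sourcetype names here are assumptions, so check the add-on's input configuration for the real values:

    index=trendmicro sourcetype="trendmicro*"
    | stats count by sourcetype, source

A use case is then just a search over those events saved as an alert, for example counting high-severity detections per endpoint and alerting above a threshold.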
I want to make a script to automatically set up and run the UF. Does anyone have an idea how to do it? I have made one, but it needs to be run in /opt/splunk/bin and I don't want that. I need it to set up automatically.
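A minimal sketch of an unattended install, assuming a Linux tarball install to /opt/splunkforwarder and a deployment server at ds.example.com:8089 (both assumptions, adjust to your environment). Calling the binary by its full path means the script never has to cd into the bin directory:

    #!/bin/bash
    set -e
    # Assumed paths, filenames, and hosts; adjust for your environment
    SPLUNK_HOME=/opt/splunkforwarder
    TARBALL=splunkforwarder-9.x-Linux-x86_64.tgz   # hypothetical filename

    # Unpack the UF (assumes the tarball is in the current directory)
    tar -xzf "$TARBALL" -C /opt

    # First start: accept the license and seed the admin password non-interactively
    "$SPLUNK_HOME/bin/splunk" start --accept-license --answer-yes --no-prompt --seed-passwd 'ChangeMe123!'

    # Point the UF at a deployment server for its inputs (hypothetical host:port)
    "$SPLUNK_HOME/bin/splunk" set deploy-poll ds.example.com:8089 -auth admin:'ChangeMe123!'

    # Start on boot so the script only ever runs once
    "$SPLUNK_HOME/bin/splunk" enable boot-start
    "$SPLUNK_HOME/bin/splunk" restart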
Logs not appearing in the panels
Hi, I have an annoying alert that fires whenever two orphaned searches run on their cron schedule. I have reassigned orphaned searches in the past without issue, but these two searches I cannot find in All configurations to reassign.

I can find the orphaned searches with the following query:

    | rest splunk_server=local /servicesNS/-/-/saved/searches add_orphan_field=yes count=0
    | search orphan=1 disabled=0 is_scheduled=1
    | eval status = if(disabled = 0, "enabled", "disabled")
    | fields title eai:acl.owner eai:acl.app eai:acl.sharing orphan status is_scheduled cron_schedule next_scheduled_time actions
    | rename title AS "search name" eai:acl.owner AS owner eai:acl.app AS app eai:acl.sharing AS sharing

When I go to Settings > All configurations and set the search to all apps and owners, I cannot find the searches. When I go to Settings > All configurations > Reassign Knowledge Objects > Orphaned and select to search all (although there are loads of orphaned objects), I cannot find the two searches causing the alerts. When I look on the SHC cluster nodes in /opt/splunk/etc/apps/<app_name>, I cannot find them either.

However, the Monitoring Console health check says the orphaned objects are on all 3 of the SHC nodes. I should also mention that when I try to reassign other visible objects for these specific owners, it throws an error: "Could not find object...". Any advice greatly appreciated. Thank you.
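One way to pin down where the two searches actually live, offered as a sketch: the scheduler log records the host, app, and owner for every scheduled run, so the next time the alert fires this should reveal which SHC member and app the orphaned searches belong to. Substitute the real search names:

    index=_internal sourcetype=scheduler savedsearch_name="<orphaned search name>"
    | stats latest(_time) as last_run by host, app, user, savedsearch_name
    | convert ctime(last_run)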
One of my colleagues said we could be running different Splunk Enterprise versions on the CM and the IDX cluster. For example, he said that we could run the IDX cluster at Splunk 9.0.x and the CM at Splunk 8.1.x. This does not sound like a Splunk best practice to me, but is it technically possible? Do all indexers need to be running the same version of Splunk?
Hi Community, we've set up the Splunk Add-on for ServiceNow to send Splunk events/alerts to our ServiceNow platform. While using OAuth 2.0 for the integration, we're facing a particular problem where no events are sent to ServiceNow and we get this error:

    Failed to create ticket. Return code is 401 (Unauthorized). Failure potentially caused by expired access token

The problem seems to be related to our refresh token lifespan. Our security prerequisite is that the token should be refreshed every 60 minutes, and it is set that way in the ServiceNow Application Registry. The default value in ServiceNow is 100 days, which seems very high. The ServiceNow documentation mentions this:

"Configure the value of the Refresh Token Lifespan parameter as high as possible so that it does not expire. Once the refresh token expires, you have to reconfigure the account." (Source)

Any idea why we're having this problem, and the reason why we have to leave the value high? My understanding is that if the token expires, the add-on should be able to go get a new access token. Thanks!
I was looking at rsync to move some frozen buckets to another location. One concern: rsync might pick up new frozen data that is still being written and remove the source file before Splunk finishes. I could add find with mtime checks, but this really slows down rsync. Concerns?
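A sketch of one way to keep the find cost low, assuming a bucket directory untouched for an hour is finished being written (adjust the threshold to your rolling behaviour). Running GNU find only over the top-level bucket directories is cheap compared to stat-ing every file, and only those directories are handed to rsync. Paths and host are hypothetical; note that --remove-source-files deletes files but leaves empty directories behind, hence the final cleanup:

    #!/bin/bash
    # Hypothetical source and destination
    SRC=/data/frozen
    DEST=archive-host:/archive/frozen

    # List only top-level bucket dirs not modified for 60+ minutes
    find "$SRC" -mindepth 1 -maxdepth 1 -type d -mmin +60 -printf '%f\n' > /tmp/ready-buckets.txt

    # Transfer just those buckets; -r is needed explicitly with --files-from
    rsync -ar --remove-source-files --files-from=/tmp/ready-buckets.txt "$SRC/" "$DEST/"

    # Remove the now-empty bucket directories left behind
    find "$SRC" -mindepth 1 -maxdepth 1 -type d -empty -delete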
Hello, I am inputting a file into Splunk containing a computer's system information extracted from the command prompt. The data file I am inputting into Splunk looks like the first photo below, where I want the fields to be the values in the first column (circled in red) and the field values to be their corresponding output values.

Yet, when adding it to Splunk, it breaks the system information file into three events (instead of 1), see the image below. How do I merge these three events into one (matching the text file uploaded) and set the fields equal to the system characteristics seen in the first column (circled in blue)?
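A sketch of a props.conf stanza that keeps the whole file as one event, assuming a custom sourcetype named systeminfo (substitute whatever your input actually sets). The idea is to enable line merging but only break before a pattern that never occurs in the data:

    [systeminfo]
    # Merge all lines into one event; break only on a pattern that never appears
    SHOULD_LINEMERGE = true
    BREAK_ONLY_BEFORE = ^PatternThatNeverMatches$
    # Don't let dates inside the file drive event breaking or timestamping
    DATETIME_CONFIG = CURRENT
    MAX_EVENTS = 10000
    TRUNCATE = 0

For the fields, since each line is a "Name: value" pair, the search-time extract command may get you most of the way, e.g. | extract pairdelim="\n" kvdelim=":" — test it against your data, as colons inside values (such as times) can confuse it.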
Hey there, is there a chart somewhere that advises what the average CPU usage for a Splunk forwarder on average hardware is? For example, what should the CPU average be on a system with a 1.5 GHz processor, 512 MB RAM, and 5 GB free disk space, with only the Splunk forwarder installed and no network connection? An answer like "0.5 to 2%" is what I'm after.
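I'm not aware of a published reference chart, but the forwarder reports its own pipeline CPU time in metrics.log, so you can measure it on your exact hardware. A sketch (replace the host name with your forwarder's):

    index=_internal host=<your_forwarder> source=*metrics.log group=pipeline
    | timechart span=5m sum(cpu_seconds) as pipeline_cpu_seconds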
My access logs:

    server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1 HTTP/1.1" 200 350 85

rex query:

    (?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)

Search query with lookup:

    *some query*
    | rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)"
    | lookup abc.csv uri_path OUTPUT serviceName apiName

I am using the query above to look up from a CSV file but am not getting any results. The lookup file has the fields shown below; apiName is the unique name in this CSV, which I am trying to link with uri_path, but I am not able to do so. Is there a way to match this and produce a result with both uri_path and apiName? Can anyone please help me with this?

    serviceName    uri_path                               http_method    apiName
    /google        /page1/page1a/633243463476/googlep1    post           postusingRRR
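One thing to check, plus a sketch: lookup matching is an exact string comparison by default, so the extracted uri_path has to be byte-for-byte identical to the CSV value, and the numeric segment (633243463476) will vary per request in real traffic. A lookup definition with wildcard matching handles that. The stanza name abc_lookup is hypothetical, and the CSV's uri_path column would then hold a pattern such as /page1/page1a/*/googlep1:

    # transforms.conf
    [abc_lookup]
    filename = abc.csv
    match_type = WILDCARD(uri_path)
    case_sensitive_match = false

The search then calls the definition instead of the file: | lookup abc_lookup uri_path OUTPUT serviceName apiName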
I'm seeing errors in a search.log related to the loadjob command and artifact replication occasionally failing for a report. There are two loadjob commands used in the scheduled report; Job-1 fails when replication is needed, and Job-2 replicates just fine. I have also switched the order in which the loadjob commands are called and have the same experience. When replication isn't needed, the loadjob for Job-1 does not error out and the report runs as expected. I'm not sure what to look at next. I replaced the IDs for readability.

    12-14-2022 17:00:11.945 INFO SearchOperator:loadjob [55258 phase_1] - triggering artifact replication uri=https://127.0.0.1:8089/services/search/jobs/scheduler_<ID>/proxy?output_mode=json, uri_path=/services/search/jobs/scheduler_<ID>/proxy?output_mode=json
    12-14-2022 17:00:12.396 ERROR HttpClientRequest [55258 phase_1] - Caught exception while parsing HTTP reply: String value too long. valueSize=524552, maxValueSize=524288
    12-14-2022 17:00:12.396 ERROR SearchOperator:loadjob [55258 phase_1] - error accessing https://127.0.0.1:8089/services/search/jobs/scheduler_<ID>/proxy?output_mode=json, statusCode=502, description=OK
Hi Friends, I'm configuring the mpstat command to get each CPU core's idle value. I have configured the following in the bin folder:

cpucore_mpstat.sh:

    mpstat -P ALL

inputs.conf:

    # This script will collect cpu utilization per core from the mpstat command
    [script://./bin/cpucore_mpstat.sh]
    disabled = false
    interval = 120
    source = server
    sourcetype = cpucore_mpstat
    index = pg_idx_whse_prod_events
    _meta = entity_type::NIX service_name::WHSE environment::PROD

I can see the events like below. I want the column highlighted in the red box (the per-core idle value) to be displayed in a dashboard. Kindly help me with how to achieve this.
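A sketch of a search that extracts %idle from the raw mpstat rows and charts it per core. It assumes one mpstat row per event and that %idle is the last numeric column, which holds for recent sysstat versions but is worth checking against your output:

    sourcetype=cpucore_mpstat
    | rex "\s(?<cpu>all|\d+)\s.*\s(?<pct_idle>[\d.]+)\s*$"
    | where isnotnull(pct_idle)
    | timechart span=5m avg(pct_idle) as avg_idle by cpu

Header rows and the banner line simply fail the rex and are filtered out by the where clause.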
Hello guys, is there any way that I could remove duplicate events that have the same timestamp using the search string below?

    index=* (EventCode=4624 OR EventCode=4625)
    | stats count(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed, count(eval(match(Keywords,"Audit Success"))) as Success, earliest(_time) as FirstAttempt, latest(_time) as LatestAttempt by Account_Name
    | where Attempts>=5 AND Success>=1 AND Failed>=2
    | eval FirstAttempt=strftime(FirstAttempt,"%x %X")
    | eval LatestAttempt=strftime(LatestAttempt,"%x %X")

The output columns are: Account_Name, Attempts, Failed, Success, FirstAttempt, LatestAttempt
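A sketch, assuming that events sharing the same _time, Account_Name, and EventCode really are duplicates in your data: inserting a dedup before the stats removes them without changing the rest of the pipeline:

    index=* (EventCode=4624 OR EventCode=4625)
    | dedup _time Account_Name EventCode

The rest of your search (stats, where, eval) follows unchanged. Note that dedup keeps the first event it sees per combination, so verify the dropped events really are duplicates before relying on it.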
Splunk is triggering the System and WMI Provider processes and causing a lot of network traffic at startup. Disabling the Splunk service significantly improved performance. When Splunk is enabled, the CPU sits at 100% for 30 minutes at boot; after disabling Splunk, the CPU was at 100% for only 6 minutes and then dropped to normal usage. How can we fix this issue?
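A sketch of where to look, assuming WMI-based inputs are what fires at boot: enabled stanzas in wmi.conf (or perfmon/WinEventLog stanzas in inputs.conf) all start polling when the service comes up. Listing the effective configuration and temporarily disabling the heavy collectors narrows down the culprit; the stanza name below is a hypothetical example:

    # List the effective WMI input configuration
    "C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe" btool wmi list --debug

    # wmi.conf: disable a suspected heavy input (hypothetical stanza name)
    [WMI:LocalProcesses]
    disabled = 1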
Is there a filter or can I create one so that I can quickly see all Incidents owned by me?
Hello! Last week (12/8/2022) my license usage went through the roof for one sourcetype, which used 24 GB. On the other hand, when looking at that sourcetype, there were no events pulled into Splunk that day (no events since 9/16). What is the cause of this issue? How can I see why our license usage went up? Also, the events pulled in that day (12/8) were the same number of events we get on an average day, yet our license usage was at 24 GB. Thank you.
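A sketch for breaking the spike down using the license master's own accounting log, which records bytes per sourcetype (st), host (h), and index (idx). Run it over 12/8 and it should show exactly what consumed the 24 GB:

    index=_internal source=*license_usage.log type=Usage
    | eval GB = b/1024/1024/1024
    | stats sum(GB) as GB by st, idx, h
    | sort - GB

One common cause of exactly this symptom is data indexed on 12/8 carrying event timestamps from other dates; license usage counts the day the data was indexed, while a time-range search for 12/8 won't show those events.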
Mission Control: Can you send Incidents to another ticketing system such as ServiceNow or Jira?
Hello there, my company's web server is generating too many logs and it is overwhelming the system. I was wondering how Splunk can help me manage the logs that are getting generated, keeping the most important logs and dropping the least important ones.
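One standard approach, sketched with hypothetical names: keep everything you care about and route the noise to Splunk's nullQueue at index time, so it never consumes storage or license. Here the sourcetype access_combined and the regex (dropping successful requests for static assets) are assumptions to adapt to your server's logs:

    # props.conf
    [access_combined]
    TRANSFORMS-drop_noise = drop_static_assets

    # transforms.conf
    [drop_static_assets]
    REGEX = GET\s+\S+\.(?:png|jpg|css|js)\s+HTTP[^"]*"\s+200
    DEST_KEY = queue
    FORMAT = nullQueue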
I'm trying to use where(isnotnull(mvfind(mvfield,field))) to see which records are part of a list. The fields are all strings, and some of them have parentheticals at the end. I noticed that mvfind does not seem to capture these fields. To illustrate my point, try the following search:

    | makeresults count=10
    | streamstats count as n
    | eval n=n-1
    | eval n=case(n<3,"Test (".n.")",n<6,"Test ".n,n<9,"(".n.")",1=1,n)
    | eventstats list(n) as mv
    | eval index=mvfind(mv,n)

When you do, you'll see that items 3-9 are captured, but 0-2 are not, even though the very values of n were used to generate the mv field. I currently have a workaround that uses rex commands to substitute different strings for the parentheses, runs my mvfind, and then substitutes them back, but it feels a little ridiculous. Does anyone know why mvfind doesn't work here, or a cleaner way to fix it?
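The cause: mvfind's second argument is treated as a regular expression, so the parentheses in "Test (0)" form a capture group rather than literal characters; as a regex, "Test (0)" matches the string "Test 0", which never occurs in the list, so items 0-2 return null. If you only need membership (as in your where(isnotnull(...)) test) rather than the position, a regex-free sketch is to compare deduplicated counts, since mvappend and mvdedup use exact string equality:

    | eval is_member=if(mvcount(mvdedup(mvappend(mv, n))) = mvcount(mvdedup(mv)), 1, 0)

If you do need the index position, escaping the regex metacharacters in n before calling mvfind is the alternative, which is essentially your rex workaround done inline.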