Hi, I need to connect to Splunk from Databricks and then read tables from Splunk in Databricks. How do I do it? I'd prefer help with the connection first. Thanks & Regards
Hi All, I have a field "last_seen" which shows a date in the format below. My requirement is to compare today's date against this last_seen date and show only those events whose last_seen is 3 or more days before today's date.

last_seen
2022-12-15T19:46:55Z
2022-12-14T19:46:55Z
2022-12-11T19:46:55Z

I thought of first calculating a field that shows me the number of days between today and this last_seen value, and then using a | where condition to filter the results. I tried the calculation below, but deltaDays comes out empty (Splunk shows it blank), so the formula now() - last_seen isn't working.

Reference: https://community.splunk.com/t5/Splunk-Search/Display-events-when-current-date-is-gt-30-days-from-expiration/m-p/249876

| eval deltaDays = (now() - last_seen)/86400
| where deltaDays >= 3
| table last_seen deltaDays

Expected results (given today's date is Dec 16), showing only results 3 or more days old:
last_seen
2022-12-11T19:46:55Z
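A likely cause: last_seen is a string, and now() - last_seen is arithmetic on a non-numeric value, so eval returns null. Converting with strptime first should fix it. A minimal sketch using the sample values above (the makeresults/makemv lines only fabricate test data; the literal Z is parsed as a plain character, so values are interpreted as local time, which is usually close enough for day-level comparisons):

| makeresults
| eval last_seen="2022-12-15T19:46:55Z,2022-12-14T19:46:55Z,2022-12-11T19:46:55Z"
| makemv delim="," last_seen
| mvexpand last_seen
| eval deltaDays = (now() - strptime(last_seen, "%Y-%m-%dT%H:%M:%SZ")) / 86400
| where deltaDays >= 3
| table last_seen deltaDays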
Reference post: https://community.splunk.com/t5/Splunk-Search/How-to-align-events-returned-by-two-separate-searches-in-a-table/m-p/475647#M133670

Hi Team, I have a similar use case to the post above but cannot get the provided solution to work. Following is my query:

index=_audit action=alert_fired
| lookup splunkalerts "Alert Name" as ss_name OUTPUT "CMDB Application Name" AS Application
| search Application="Test"
| stats count(triggered_alerts) as triggered_alerts by Application ss_name severity
| rename ss_name as "Alert Name" severity as "Severity"
| appendcols
    [| rest /servicesNS/-/-/saved/searches timeout=120
    | lookup splunkalerts "Alert Name" AS title OUTPUT "CMDB Application Name" AS Application "Pipeline Alert" "PROD?"
    | search Application="Test"
    | rename eai:acl.owner as owner title as "Alert Name"
    | fields owner "Alert Name" Application, "Pipeline Alert", "PROD?", alert.track, disabled, search, action.email.to, cron_schedule]
| table Application, "Alert Name", Severity, triggered_alerts, "PROD?", "Pipeline Alert", alert.track, disabled, search, action.email.to, cron_schedule

The issue I am having with the above query is that the triggered_alerts count returned from the outer query is not aligned with the search field value returned from the subsearch after the appendcols.
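appendcols pastes result sets together row-by-row by position, so whenever the outer search and the subsearch return rows in a different order (or a different number of rows), the columns drift apart. Keying the combination on "Alert Name" avoids that; one option is join. A sketch built from the query above (mind join's subsearch row and runtime limits):

index=_audit action=alert_fired
| lookup splunkalerts "Alert Name" as ss_name OUTPUT "CMDB Application Name" AS Application
| search Application="Test"
| stats count(triggered_alerts) as triggered_alerts by Application ss_name severity
| rename ss_name as "Alert Name" severity as "Severity"
| join type=left "Alert Name"
    [| rest /servicesNS/-/-/saved/searches timeout=120
    | lookup splunkalerts "Alert Name" AS title OUTPUT "CMDB Application Name" AS Application "Pipeline Alert" "PROD?"
    | search Application="Test"
    | rename eai:acl.owner as owner title as "Alert Name"
    | fields owner "Alert Name" "Pipeline Alert" "PROD?" alert.track disabled search action.email.to cron_schedule]
| table Application "Alert Name" Severity triggered_alerts "PROD?" "Pipeline Alert" alert.track disabled search action.email.to cron_schedule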
I Got "WARNING web interface does not seem to be available" message when I started Splunk. Although I got the above message I can login to Splunk via webpage and run search as expected. Is there an... See more...
I Got "WARNING web interface does not seem to be available" message when I started Splunk. Although I got the above message I can login to Splunk via webpage and run search as expected. Is there anything I need to do to avoid this message.
Hi all, I am having issues hiding some labels on a bar chart in Dashboard Studio. I would like to hide the "app" and "ABAP" labels and the 0-100 x-axis.

I've been reading https://docs.splunk.com/Documentation/DashApp/0.9.0/DashApp/Comparison (bar chart options) and have implemented everything that looked relevant below, without success.

Thanks, Dan

Search:

| makeresults count=33
| eval app="ABAP"
| eval type="Green"
| append
    [| makeresults count=24
    | eval app="ABAP"
    | eval type="Red" ]
| chart count by app, type
| table app Red Green

Code:

"viz_A8xGX4KY": {
    "type": "splunk.bar",
    "dataSources": {
        "primary": "ds_rxzvxyTs_ds_D0zk0prm"
    },
    "title": "",
    "options": {
        "stackMode": "stacked100",
        "seriesColorsByField": {
            "Green": "#52D017",
            "Red": "#E41B17"
        },
        "legendDisplay": "off",
        "backgroundColor": "#152238",
        "xAxisTitleText": "",
        "axisTitleY.text": "",
        "axisTitleY2.text": "",
        "axisTitleX.visibility": "collapsed",
        "axisTitleY.visibility": "collapsed",
        "axisTitleY2.visibility": "collapsed",
        "axisY.scale": "",
        "axisY2.scale": "",
        "chart.showDataLabels": "none",
        "chart.overlayFields": "",
        "gridLinesY.showMajorLines": false,
        "gridLinesX.showMajorLines": false,
        "axisY.minimumNumber": "none"
    }
}
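For what it's worth, the axisTitle* options only control the axis titles; the tick labels themselves are governed by the axisLabels* options. A sketch of what might work here, assuming the splunk.bar visualization in this DashApp version supports the major-label visibility options (worth verifying against the Dashboard Studio docs for your release):

"options": {
    "axisLabelsX.majorLabelVisibility": "hide",
    "axisLabelsY.majorLabelVisibility": "hide"
}

Since this is a horizontal bar chart, the 0-100 scale is the x-axis and the category labels ("ABAP") sit on the y-axis, so the first option should remove the 0-100 labels and the second the category labels. The "app" text is the axis title, which the existing "axisTitleY.visibility": "collapsed" should handle.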
Hi All, hope you are doing well! Basically we want to integrate the Trend Micro Vision One solution with our Splunk. Before doing it, I just want to verify whether my understanding is correct:

1. We need to install the Vision One application from Splunkbase.
2. After installing the app, we need to open it and click on Configuration.
3. Then we need to enter the URL and authentication token.
4. We need to choose the log file type.

Then we will start receiving the data? Kindly let me know if my understanding is correct or not.

If my understanding above is correct, I want to know one thing: how do we create a use case? Since we are using third-party software to onboard the data, how do we write queries and so on? Sorry if I sound like an amateur, but this is my first time.

Thanks, Debjit
I want to make a script to automatically set up and run the UF. Does anyone have an idea how to do it? I have made one, but it needs to be run from /opt/splunk/bin and I don't want that. I need it to set itself up automatically.
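If the blocker is that the script only works when run from inside the bin directory, calling the splunk binary by absolute path removes that requirement. A minimal sketch for a Linux UF, assuming it is already unpacked to /opt/splunkforwarder (the paths, credentials, and deployment server address are placeholders):

#!/bin/sh
SPLUNK_HOME=/opt/splunkforwarder   # placeholder install path

# First start: accept the license without prompting
"$SPLUNK_HOME/bin/splunk" start --accept-license --answer-yes --no-prompt

# Optional: point the UF at a deployment server (placeholder host:port)
"$SPLUNK_HOME/bin/splunk" set deploy-poll ds.example.com:8089 -auth admin:changeme

# Start the forwarder automatically at boot
"$SPLUNK_HOME/bin/splunk" enable boot-start

"$SPLUNK_HOME/bin/splunk" restart

Run it as root (enable boot-start needs that); because every call uses the absolute path, the script itself can live anywhere.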
Logs not appearing in the panels
Hi, I have an annoying alert that fires whenever 2 orphaned searches run on their cron schedule. I have reassigned orphaned searches in the past without issue, but these two searches I cannot find in All configurations to reassign.

I can find the orphaned searches with the following query:

| rest splunk_server=local /servicesNS/-/-/saved/searches add_orphan_field=yes count=0
| search orphan=1 disabled=0 is_scheduled=1
| eval status = if(disabled = 0, "enabled", "disabled")
| fields title eai:acl.owner eai:acl.app eai:acl.sharing orphan status is_scheduled cron_schedule next_scheduled_time actions
| rename title AS "search name" eai:acl.owner AS owner eai:acl.app AS app eai:acl.sharing AS sharing

When I go to Settings > All configurations and set the search to all apps and owners, I cannot find the searches. When I go to Settings > All configurations > Reassign Knowledge Objects > Orphaned and select to search all (although there are loads of orphaned objects), I cannot find these 2 searches causing the alerts. When I look on the SHC cluster nodes in /opt/splunk/etc/apps/<app_name>, I cannot find them either.

However, the MC health check says the orphaned objects are on all 3 of the SHC nodes. I should also mention that when I try to reassign other visible objects for these specific owners, it throws an error: "Could not find object...". Any advice greatly appreciated. Thank you
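One possible explanation for the mismatch: the query runs with splunk_server=local, so it only inspects the member you are logged into, while the MC health check looks at all SHC members. Widening the REST call may show which member actually holds the two searches. A sketch (splunk_server=* only reaches peers this node can see, so if the other SHC members are not among them, run the original query while logged into each member in turn):

| rest splunk_server=* /servicesNS/-/-/saved/searches add_orphan_field=yes count=0
| search orphan=1 disabled=0 is_scheduled=1
| stats values(splunk_server) as on_members by title eai:acl.owner eai:acl.app eai:acl.sharing cron_schedule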
One of my colleagues said we could run different Splunk Enterprise versions on the CM and the IDX cluster. For example, he said that we could run the IDX cluster at Splunk 9.0.x and the CM at Splunk 8.1.x. This does not sound like a Splunk best practice to me, but is it technically possible? Do all indexers need to be running the same version of Splunk?
Hi Community, we've set up the Splunk Add-on for ServiceNow to send Splunk events/alerts to our ServiceNow platform. While using OAuth 2.0 for the integration, we're facing a particular problem where no events are sent to ServiceNow and we get this error:

Failed to create ticket. Return code is 401 (Unauthorized). Failure potentially caused by expired access token

The problem seems to be related to our refresh token lifespan. Our security prerequisite is that the token should be refreshed every 60 minutes, and it is set that way in the ServiceNow Application Registry. The default value in ServiceNow is 100 days, which seems very high. The ServiceNow documentation mentions this:

Configure the value of the Refresh Token Lifespan parameter as high as possible so that it does not expire. Once the refresh token expires, you have to reconfigure the account. (Source)

Any idea why we're having this problem, and the reason why we have to leave the value high? My understanding is that if the access token expires, the add-on should be able to go get a new one. Thanks!
I was looking at rsync to move some frozen buckets to another location. One concern: rsync could pick up new frozen data still being written and remove the source file before Splunk finishes. I could add find and mtime checks, but this really slows down rsync. Concerns?
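One pattern that avoids racing the freeze without putting mtime checks in front of every rsync run: copy everything first without deleting, then remove sources in a separate pass restricted to buckets that have been quiet for a while. A rough sketch, assuming frozen buckets arrive as db_*/rb_* directories under a local path (all paths are placeholders):

#!/bin/sh
SRC=/data/frozen              # placeholder source
DST=remote:/archive/frozen    # placeholder destination

# Pass 1: copy everything, never delete sources
rsync -a "$SRC/" "$DST/"

# Pass 2: re-sync and remove only buckets untouched for 60+ minutes,
# so anything caught mid-write in pass 1 is copied again before removal
find "$SRC" -mindepth 1 -maxdepth 1 -type d \( -name 'db_*' -o -name 'rb_*' \) -mmin +60 \
    -exec rsync -a --remove-source-files {} "$DST/" \;

# --remove-source-files leaves empty directories behind
find "$SRC" -mindepth 1 -type d -empty -delete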
Hello, I am inputting a file into Splunk showing a computer's system information extracted from the command prompt. The data file I am inputting into Splunk looks like the first photo below, where I want the fields to be set to the names in the first column (circled in red) and their field values set to the corresponding output values.

Yet, when adding it to Splunk, it breaks the system information file into three events (instead of 1); see the image below. How do I merge these three events into one (matching the text file uploaded) and set the fields equal to the system characteristics seen in the first column (circled in blue)?
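Event breaking is controlled by props.conf for the sourcetype, so one way to keep the whole file as a single event is to break only before a line that occurs once per file. A sketch, assuming this is Windows systeminfo output whose first line starts with "Host Name" (the sourcetype name is a placeholder; apply it on the instance that parses the data and re-index the file):

[systeminfo_report]
# merge lines into one event; only break before the first line of the report
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = Host\sName
# raise the per-event line limit above the report length (default is 256)
MAX_EVENTS = 1000

Once it is a single event, the "Field : Value" rows can often be pulled out at search time, e.g. with | extract pairdelim="\n" kvdelim=":" (worth testing against your exact layout).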
Hey there, is there a chart somewhere that advises what the average CPU usage for a Splunk forwarder on average hardware is? For example, what should the CPU average be on a system with a 1.5 GHz processor, 512 MB RAM, and 5 GB free disk space, with only the Splunk forwarder installed and no network connection? I'm after an answer like "0.5 to 2%".
My access logs:

server - - [date & time] "GET /google/page1/page1a/633243463476/googlep1 HTTP/1.1" 200 350 85

rex query:

(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)

Search query with lookup:

*some query*
| rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)\s(?<uri_query>\S+)\"\s(?<statusCode>\d+)\s(?<body_size>\d+)\s\s(?<response_time>\d+)"
| lookup abc.csv uri_path OUTPUT serviceName apiName

I am using the above query to look up values from the CSV file but am not getting any results. The lookup file has the fields below; apiName is the unique name in this CSV file, which I am trying to link to uri_path, but I am not able to do so. Is there a way to match these and produce a result with both uri_path and apiName? Can anyone please help me with this?

serviceName   uri_path                              http_method   apiName
/google       /page1/page1a/633243463476/googlep1   post          postusingRRR
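When an exact-match lookup returns nothing, it is usually a small mismatch between the event field and the CSV column: stray whitespace, case, or a CSV header that differs from the field name used in | lookup. A sketch to check how many events match (the trim is just a guess at the culprit):

*some query*
| rex field=_raw "(?<SRC>\d+\.\d+\.\d+\.\d+).+\]\s\"(?<http_method>\w+)\s(?<serviceName>/[^/]+)(?<uri_path>[^?\s]+)"
| eval uri_path=trim(uri_path)
| lookup abc.csv uri_path OUTPUT apiName
| eval matched=if(isnull(apiName), "no", "yes")
| stats count by uri_path matched

Separately, running | inputlookup abc.csv | table serviceName uri_path http_method apiName confirms the header names and values match exactly. If the intent is for one CSV row to cover many numeric IDs, a wildcard lookup definition (match_type = WILDCARD(uri_path) in transforms.conf, with /page1/page1a/*/googlep1 in the CSV) is an alternative to exact matching.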
I'm seeing errors in a search.log related to the loadjob command and artifact replication occasionally failing for a report. There are two loadjob commands used in the scheduled report; Job-1 fails when replication is needed and Job-2 replicates just fine. I have also switched the order in which the loadjob commands are called and had the same experience. When replication isn't needed, the loadjob for Job-1 does not error out and the report runs as expected. I'm not sure what to look at next. I replaced the IDs for readability.

12-14-2022 17:00:11.945 INFO SearchOperator:loadjob [55258 phase_1] - triggering artifact replication uri=https://127.0.0.1:8089/services/search/jobs/scheduler_<ID>/proxy?output_mode=json, uri_path=/services/search/jobs/scheduler_<ID>/proxy?output_mode=json
12-14-2022 17:00:12.396 ERROR HttpClientRequest [55258 phase_1] - Caught exception while parsing HTTP reply: String value too long. valueSize=524552, maxValueSize=524288
12-14-2022 17:00:12.396 ERROR SearchOperator:loadjob [55258 phase_1] - error accessing https://127.0.0.1:8089/services/search/jobs/scheduler_<ID>/proxy?output_mode=json, statusCode=502, description=OK
Hi Friends, I'm configuring the mpstat command to get each CPU core's idle value. I have configured the below in the bin folder:

cpucore_mpstat.sh:
mpstat -P ALL

inputs.conf:
# This script will collect cpu utilization per core from mpstat command
[script://./bin/cpucore_mpstat.sh]
disabled = false
interval = 120
source = server
sourcetype = cpucore_mpstat
index = pg_idx_whse_prod_events
_meta = entity_type::NIX service_name::WHSE environment::PROD

I can see the events coming in (screenshot omitted). I want the %idle column highlighted in red in the screenshot to be displayed in a dashboard. Kindly help on how to achieve this.
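Since the script output is indexed as raw text, the %idle column has to be extracted at search time before it can be charted. A sketch, assuming each mpstat line is indexed as its own event and %idle is the last numeric column of each per-core line (column layout varies between sysstat versions, so adjust the rex to your output):

index=pg_idx_whse_prod_events sourcetype=cpucore_mpstat
| rex "\s(?<cpu>all|\d+)\s+.*\s(?<pct_idle>[\d.]+)\s*$"
| where cpu!="all"
| timechart span=5m avg(pct_idle) by cpu

Dropping the timechart and using | stats avg(pct_idle) by cpu instead gives a single value per core, if a table is the shape you want on the dashboard.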
Hello guys, is there any way I could remove duplicate events that have the same timestamp using the search string below?

index=* (EventCode=4624 OR EventCode=4625)
| stats count(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed, count(eval(match(Keywords,"Audit Success"))) as Success, earliest(_time) as FirstAttempt, latest(_time) as LatestAttempt by Account_Name
| where Attempts>=5 AND Success>=1 AND Failed>=2
| eval FirstAttempt=strftime(FirstAttempt,"%x %X")
| eval LatestAttempt=strftime(LatestAttempt,"%x %X")

The output columns: Account_Name, Attempts, Failed, Success, FirstAttempt, LatestAttempt
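If "duplicate" means several events sharing the same timestamp, account, and event code, a dedup before the stats should drop them without changing the aggregation logic. A sketch (assuming that combination is what defines a duplicate in your data):

index=* (EventCode=4624 OR EventCode=4625)
| dedup _time Account_Name EventCode
| stats count(Keywords) as Attempts, count(eval(match(Keywords,"Audit Failure"))) as Failed, count(eval(match(Keywords,"Audit Success"))) as Success, earliest(_time) as FirstAttempt, latest(_time) as LatestAttempt by Account_Name
| where Attempts>=5 AND Success>=1 AND Failed>=2
| eval FirstAttempt=strftime(FirstAttempt,"%x %X")
| eval LatestAttempt=strftime(LatestAttempt,"%x %X")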
Splunk is triggering the System and WMI Provider processes and causing a lot of network traffic at startup. Disabling the Splunk service significantly improved performance. When Splunk is enabled, the CPU sits at 100% for 30 minutes at boot; after disabling Splunk, the CPU was at 100% for only 6 minutes and then dropped to normal usage. How can we fix this issue?
Is there a filter or can I create one so that I can quickly see all Incidents owned by me?