All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunk team, I have a question: how can I avoid or delete duplicated data in my index when the REST API does a GET request? Thanks all.
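Splunk does not remove duplicates at ingest time, so one common workaround is to deduplicate at search time instead; the index, sourcetype, and event_id field below are placeholders for illustration, not from the original post:

```
index=my_index sourcetype=my_sourcetype
| dedup event_id
```

Selectively removing events that are already indexed is only possible with the `delete` command (which requires the can_delete role), and that merely hides events from search; it does not reclaim disk space.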
Hi All, does Splunk log elapsed time automatically? I am trying to join several different source types in Splunk that are linked by a unique correlationid logged and flowing between them, but I don't see elapsed times on all calls. I want to compute the latency between different calls for unique ids flowing between them, and the Splunk timestamp is not accurate enough. What is expected here? Do I need to ask my feature teams to log elapsed time to Splunk, or is this logged automatically? I don't see it logged for many source types as is.
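One way to compute per-id latency from the timestamps Splunk already parses (a sketch; the index, sourcetype names, and windowing are assumptions):

```
index=my_index sourcetype IN (service_a, service_b) correlationid=*
| stats earliest(_time) AS first_seen latest(_time) AS last_seen BY correlationid
| eval latency_sec = last_seen - first_seen
```

Note that Splunk only records the timestamp it extracts from each event; it does not measure elapsed time itself. If _time is not precise enough for this, the applications need to log their own elapsed-time values.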
Hello Splunk team, I want to use REST API Modular Input but I am getting this error:

6-01-2020 16:45:56.588 -0500 ERROR ExecProcessor - message from "/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/rest_ta/bin/rest.py" FATAL Trial Activation key for App 'REST API Modular Input' failed. Please ensure that you copy/pasted the key correctly.

The key is correct. I have tried different keys, but I get the same error. Thanks all.
Hi, I have a query such as the one below.

index=abc* host=efg*
| stats latest(_time) as latest by host
| eval Status = case(latest <= relative_time(now(),"-15m") AND latest > relative_time(now(),"-30m"), "smiley1", latest <= relative_time(now(),"-30m"), "smiley2", true(), "smiley3")
| eval Last_Updated_Time = strftime(latest,"%c")

I now want to view the output in either a single value chart or in trellis mode, with the host name, smiley, and Last_Updated_Time below the icon. I do not have access to CSS or JS scripts. Suggestions please. Thanks in advance.
I signed up for the Splunk Cloud free trial. It says I have an instance but when I try to access it, I get an error. How do I try out Splunk Cloud?
I have configured multiple Data Inputs pointing at folders such as /mnt/DataInput1. There is a lot of noise, so I tried following the link below to add a blacklist to the inputs.conf for the input, to restrict junk data such as Level=INFO type Linux data. https://docs.splunk.com/Documentation/Splunk/latest/Data/Whitelistorblacklistspecificincomingdata?r=searchtip

Example input:

[monitor:///mnt/blob/XXXXXXXXXX/logs]
disabled = false
index = customer_XXXX_XXXXXXXX
blacklist = Level="(INFO)"

Unfortunately, after several tries, and after making each change, restarting Splunk, then waiting several hours for the Data Inputs page to queue up the number of files, it still doesn't work. Can anyone shed some insight into what I'm doing wrong? Ultimately I'd like to do something like:

blacklist = Level="(INFO)"|coderef="(salt*)|"consul)"

where, as you can see above, I want to blacklist different event types. Help?
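One point worth checking (not stated in the original post): in a [monitor://...] stanza, blacklist matches file paths, not event content, so a pattern like Level="(INFO)" will never match anything. Filtering events by their content is normally done with a nullQueue transform; this sketch assumes a sourcetype name:

```
# props.conf
[my_sourcetype]
TRANSFORMS-drop_noise = drop_info_events

# transforms.conf
[drop_info_events]
REGEX = Level=INFO
DEST_KEY = queue
FORMAT = nullQueue
```

These settings must live on the instance that parses the data (an indexer or heavy forwarder), not on a universal forwarder.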
Hi All, one of the search head members in the search head cluster shows the message: "Local KV Store has replication issues. See introspection data and mongod.log for details. Local instance has state Recovering.". What can I do to fix this? When I check with the kvstore status command on this particular SH member, the status is shown as recovering. Even after using the resync command, the issue still exists. Can you please let me know what steps should be followed to rectify the issue? Will there be any impact on the performance of the search heads?
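A commonly suggested recovery sequence for a member stuck in the Recovering state, when resync alone does not help (a sketch; back up $SPLUNK_HOME/var/lib/splunk/kvstore first):

```
# on the affected search head member
splunk show kvstore-status      # confirm the member's state
splunk stop
splunk clean kvstore --local    # wipes only this member's local KV store copy
splunk start                    # member resynchronizes from the cluster
```

While the member resynchronizes it may serve stale lookup data, but the other cluster members are unaffected.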
Hi all, upon a recent upgrade to Splunk 8.0.4, I started seeing this error message when running a subsearch against a metric index using the mstats command:

StatsFileReader file open failed file=G:\Program Files\Splunk\var\run\splunk\dispatch\subsearch_1591031620.129695_967FF3A8-E48E-44F1-9399-E958A1F07906_1591031649.2\statstmp_96.sb.lz4

When I checked the folder \Program Files\Splunk\var\run\splunk\dispatch\subsearch_1591031620.129695_967FF3A8-E48E-44F1-9399-E958A1F07906_1591031649.2 on the search head executing the search, the file does not seem to exist. I have tried researching the issue to no avail. Has anyone seen this issue, and if so, is there a solution? Thanks in advance!
I am trying to set up an alert that runs a script after finding a result. For some reason, we see this error each time we try to run the script:

06-01-2020 13:20:09.091 -0500 ERROR ModularUtility - Specified filename "/opt/splunk/etc/apps/TA-S3Deleter/bin/s3_file_deleter.py" not found in search path.
06-01-2020 13:20:09.091 -0500 ERROR sendmodalert - action=s3_file_deleter - Failed to find alert.execute.cmd "/opt/splunk/etc/apps/TA-S3Deleter/bin/s3_file_deleter.py".

Here is how the alert_actions.conf is set up:

[s3_file_deleter]
is_custom = 1
label = S3 File Deleter
description = This action passes along a value in filePath to a python script that will delete a file in an S3 bucket.
payload_format = json
alert.execute.cmd = /opt/splunk/etc/apps/TA-S3Deleter/bin/s3_file_deleter.py

The script definitely exists in that directory. I've reviewed a lot of the documentation on this, and there is no good example of simply running a Python script. Any insight would be greatly appreciated. Thanks.
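One thing to try (an assumption based on the error text, not a confirmed fix): for custom alert actions, alert.execute.cmd is usually given just the script filename, and Splunk locates it in the owning app's bin directory; an absolute path can fail the search-path lookup seen in the error. A sketch:

```
[s3_file_deleter]
is_custom = 1
label = S3 File Deleter
description = Passes filePath to a Python script that deletes a file in an S3 bucket.
payload_format = json
alert.execute.cmd = s3_file_deleter.py
```

Also confirm the script is executable (chmod +x) and readable by the user running splunkd.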
How can we get Splunk license % usage data over a long period of time? The following query only gives us the last 2 months of data:

index=_internal source="license_usage.log" type=usage idx=""
| eval MB = round(b/1024/1024,2)
| timechart span=1d sum(MB) by idx
| addtotals
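For longer look-backs, the daily RolloverSummary events in license_usage.log are the usual source, though how far back any of this reaches is still bounded by the retention of the _internal index (frozenTimePeriodInSecs, roughly 30 days by default). A sketch:

```
index=_internal source=*license_usage.log* type=RolloverSummary earliest=-1y@d
| eval GB = round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) AS daily_license_usage_GB
```

For history beyond _internal retention, a scheduled search that writes daily totals to a summary index is the standard approach.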
Hello all, we are connecting to our McAfee database using the McAfee Add-on 2.2.1 and DB Connect 3.3.1. The search reads perfectly; however, the McAfee database timestamps are in UTC. On the database connection, we have defined our timezone as Canada/Eastern: -04:00, yet the timestamps are still shown in UTC. I was expecting the timestamps to be converted by DB Connect to the configured timezone. Is there any way to do that? By the way, I've tried the solutions in answers 612262 and 620601 to no avail: Setting correct timezone for mcafee logs in dbconnect: https://answers.splunk.com/answers/612262/index.html How can I override the timezone for Splunk DBX 3.1?: https://answers.splunk.com/answers/620601/index.html Thanks all, Pablo
Hi Splunkers, please guide us on the requirement below.

Input:

server, env, req no, input field, status
host-1,PROD,1666680,mobile1,Deployment_Successful
host-1,PROD,1666680,mobile2,Deployment_failed
host-1,PROD,1666680,mobile3,exception
host-1,PROD,1666001,mobile1,Deployment_Successful
host-1,PROD,1666601,mobile2,Deployment_failed
host-1,PROD,16666801,mobile3,exception

Expected output: pie chart with status count.

My trial:

sourcetype=sourcetype1 source=*.log
| rex field=_raw "(?<server>\w+\-\d+)\,(?<env>\w+\/\w+)\,(?<reqno>\d+)\,(?<inputfield>\w+)\,,(?<Status>\w+.*)"
| stats count by Status

The above search does not show the count if the log has different statuses. Kindly help and guide on this.
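A possible corrected version of the attempted search (the capture-group names are assumptions based on the header row; the key changes are removing the doubled comma before the status group and matching env as a plain word, since the sample shows PROD with no slash):

```
sourcetype=sourcetype1 source=*.log
| rex field=_raw "(?<server>\w+\-\d+),(?<env>\w+),(?<reqno>\d+),(?<inputfield>\w+),(?<Status>\w+.*)"
| stats count by Status
```

Once the rex matches every row, | stats count by Status feeds a pie chart directly.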
Hi, can anyone help me with the steps to create a user with the Administer Users permission? I want to access the REST API, and I read that accessing the RBAC APIs requires either the account owner or the Administer Users privilege. Also, if I could be told the difference between the Account Owner and a user with Administer Users privileges, that would be a great help. Thanks.
Hi, I am looking to upgrade multiple universal forwarders installed on Linux OS in one go. Could you please help me with the script I should use and the detailed steps on how to use it? Note: I have a standalone Splunk indexer.
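A minimal sketch of a mass-upgrade loop over SSH (the package filename, paths, and hosts file are assumptions for illustration; test on one host first):

```
#!/bin/bash
# hosts.txt: one forwarder hostname per line (hypothetical)
PKG=splunkforwarder-x.y.z-linux-2.6-x86_64.tgz

while read -r host; do
  scp "$PKG" "$host:/tmp/"
  ssh "$host" "/opt/splunkforwarder/bin/splunk stop && \
    tar -xzf /tmp/$PKG -C /opt && \
    /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes"
done < hosts.txt
```

Untarring the new version over /opt/splunkforwarder preserves the etc/ configuration. At larger scale, configuration-management tools such as Ansible or Puppet are the usual route.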
I have installed Splunk on my office PC, and I got a message from an IT engineer saying the following: "We were alerted to unusual behavior from Splunkd on your machine. It attempted to scrape memory via LSASS and as such was terminated. Is this normal behavior for this application?" Please let me know about this; otherwise I may have to remove Splunk from the PC. What should I know about this?
Hello, I have been having trouble onboarding some logs that have extra data at the top and are not breaking into individual events. I would like to remove the first 7 lines (I tried SEDCMD in props) and then break the rest into individual events that start with "CEF:0". Any help would be appreciated. Sample log that came in as one event:

accountId:1111111 configId:1111 checksum:fffffffffffffffffffffff format:CEF startTime:1591023419998 endTime:1591023786052 |==|

CEF:0|Incapsula|SIEMintegration|1|1|Normal|0| fileId=763000040111111111 sourceServiceName=site.site.com siteid=41611111 suid=1111111 requestClientApplication=Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/) deviceFacility=ams cs2=false cs2Label=Javascript Support cs3=false cs3Label=Support cs1=NA cs1Label=Cap Support cs4=bf0e3ba9-cad7-42e3-917d-ffffffffffff cs4Label=VID cs5=a069314a28fc3f38df1a7fd08797ff70400c236c3f43c214a588d2c6b92fada93f21b37a01969be556f0370e4534fbd14969aa1f882f5680157c6c2cf9ffffff cs5Label=clappsig dproc=Unclassified cs6=Bot cs6Label=clapp ccode=DE cs7=51.2993 cs7Label=latitude cs8=9.491 cs8Label=longitude Customer=company start=1591023257044 request=site.site.com/products/ requestMethod=GET qstr=offset=70&max\ cn1=200 app=HTTPS act=REQ_PASSED deviceExternalId=107081971111111111 sip=x.x.x.x spt=443 in=6214 xff=x.x.x.x cpt=28286 src=x.x.x.x ver=TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 end=1591023257493

CEF:0|Incapsula|SIEMintegration|1|1|Normal|0| fileId=763000040111111111 sourceServiceName=site.site.com siteid=11111111 suid=1111111 requestClientApplication=Mozilla/5.0 (compatible; MJ12bot/v1.4.8; http://mj12bot.com/) deviceFacility=ams cs2=false cs2Label=Javascript Support cs3=false cs3Label=Support cs1=NA cs1Label=Cap Support cs4=bf0e3ba9-cad7-42e3-917d-ffffffffffff cs4Label=VID cs5=a069314a28fc3f38df1a7fd08797ff70400c236c3f43c214a588d2c6b92fada93f21b37a01969be556f0370e453411111111111f882f5680157c6c2cf9ac15cc cs5Label=clappsig dproc=Unclassified cs6=Bot cs6Label=clapp ccode=DE cs7=51.2993 cs7Label=latitude cs8=9.491 cs8Label=longitude Customer=company start=1591023260718 request=site.site.com/ requestMethod=GET qstr=offset=70&max cn1=302 app=HTTPS act=REQ_PASSED deviceExternalId=107082561111111111 sip=x.x.x.x spt=443 in=368 xff=x.x.x.x cpt=28286 src=x.x.x.x ver=TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 end=1591023260838
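A props.conf sketch for this kind of data (the sourcetype name is an assumption): break a new event at each CEF:0, then drop the header block, which becomes its own event once breaking works, via a nullQueue transform:

```
# props.conf
[incapsula_cef]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n\s]+)(?=CEF:0\|)
TRANSFORMS-drop_header = drop_cef_header

# transforms.conf
[drop_cef_header]
REGEX = ^accountId:
DEST_KEY = queue
FORMAT = nullQueue
```

LINE_BREAKER requires the capture group (the matched text is discarded), and the lookahead keeps CEF:0| at the start of each resulting event.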
The actual requirement is: when the status field value changes from one value to another, an alert needs to be triggered. Below are the status field values:

Extended recovery
Investigation suspended
False-positive
Investigating
Service degradation
Service restored
Restoring service
Post-incident report published

Example: if the status field value changes from "false-positive" to "investigating", the alert should be triggered. If the field value changes from "false-positive" to "false-positive", no alert should be triggered.
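One way to detect a change between consecutive events (a sketch; the index name and the incident_id grouping field are assumptions, adjust to whatever identifies one incident):

```
index=my_index status=*
| sort 0 _time
| streamstats current=f window=1 last(status) AS prev_status BY incident_id
| where isnotnull(prev_status) AND status != prev_status
```

Scheduled as an alert that triggers when result count > 0, this fires only when status differs from the previous value, so false-positive followed by false-positive produces no rows.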
Hello, when using timechart without a BY clause, this works:

index IN (idx) AND host IN (server) AND source IN (ssl_access_log) AND sourcetype=access_combined AND method IN (GET,POST) AND file="confirm.jsp" AND date_hour>=6 AND date_hour<=22 latest=+1d@d
| eval certsFiled=case(file="confirm.jsp","1")
| timechart count span=2min
| timewrap d series=short
| where _time >= relative_time(now(), "@d+6h+55min") AND _time <= relative_time(now(), "@d+22h")
| eval colname0 = strftime(relative_time(now(), "@d"),"%D-%a")
| eval colname1 = strftime(relative_time(now(), "-d@d"), "%D-%a")
| eval colname2 = strftime(relative_time(now(), "-2d@d"), "%D-%a")
| eval {colname0} = s0
| eval {colname1} = s1
| eval {colname2} = s2
| fields - s* col*

However, once I add the BY clause, the logic no longer works:

index IN (idx) sourcetype IN (ssl_access_log) AND date_hour>=17 AND date_hour<=20 Exception OR MQException earliest=-7d@d latest=+1d@d
| rex "\s(?<exception>[a-zA-Z\.]+Exception)[:\s]"
| search exception=*
| eval exception=case(exception="MQException","mqX", exception="com.ibm.mq.MQException","mqXibm")
| timechart count span=1m BY exception
| timewrap d series=short
| where _time >= relative_time(now(), "@d+17h") AND _time <= relative_time(now(), "@d+20h")
| eval colname0 = strftime(relative_time(now(), "@d"),"%D-%a")
| eval colname1 = strftime(relative_time(now(), "-d@d"), "%D-%a")
| eval colname2 = strftime(relative_time(now(), "-2d@d"), "%D-%a")
| eval {colname0} = s0
| eval {colname1} = s1
| eval {colname2} = s2
| fields - s* col*

This includes many more days (colname) and exceptions (removed for brevity). UPDATE: Here is the chart without renaming. Instead of ibmMqExcpttn_s7 it should read Mon 5/25/20 ibmMqExcptn. _s6 would be Tue; _s5 would be Wed; etc. Thanks and God bless, Genesius
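When timewrap runs after timechart ... BY, the series come out named <byvalue>_s<N>, so per-series renames can be generated with foreach rather than one eval per column (a sketch; the day labels are hardcoded here purely for illustration):

```
... | timechart count span=1m BY exception
| timewrap d series=short
| foreach *_s0 [eval "<<MATCHSTR>>_Mon_5/25" = '<<FIELD>>']
| foreach *_s1 [eval "<<MATCHSTR>>_Sun_5/24" = '<<FIELD>>']
| fields - *_s0 *_s1
```

<<MATCHSTR>> expands to whatever the * matched (the exception name), so each wrapped day can be relabeled without listing every exception explicitly.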
I have reviewed similar questions but haven't found a fix for this. My Windows UF is using high memory and spawning many processes, causing the servers to become inaccessible. Too many PowerShell scripts are being launched, although I have disabled all the PS scripts in the deployed TA_windows. Is there anywhere else in Splunk I need to look for PS scripts that may be causing this problem?
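To find every enabled PowerShell input regardless of which app ships it, btool merges all configuration layers (a standard Splunk CLI check, run from the UF's bin directory):

```
splunk.exe btool inputs list --debug | findstr /i powershell
```

The --debug flag prints the file each stanza comes from, which shows whether another app, or a local/ override, is still enabling the scripts.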
Hi all, I am not able to extract the values below from the JSON file. The fields are:

"initiator": test_abce, "releasenumber":0.0.11, "source": "test420", "deployenv": testppt, "ServiceMD": app "ServiceManager": app1 "ServiceLead": app2

I am attaching the screenshot for more information. There is a space; I think that is causing the issue. I also tried adding the field names in my query, like:

releasenumber source | table releasenumber source deployenv ServiceMD ServiceManager

I am not able to list the data highlighted in yellow. JSON file:

{
  "Pirid" : 7965,
  "url": "https://connects.test.com/home",
  "id": 88,
  "level": "202",
  "guideline": "2.4",
  "help": { "description": "testid" },
  "pagestest1" : 0,
  "pagestest2": 0,
  "pagestest3": 1,
  "principle": 2,
  "severity": "review",
  "success_criterion": "2.4.10",
  "initiator": test_abce,
  "releasenumber":0.0.11,
  "source": "test420",
  "deployenv": testppt,
  "ServiceMD": app
  "ServiceManager": app1
  "ServiceLead": app2
  "success_criterion_title": "Section",
  "more_details": "https://api.test",
  "ApplicationNm": "Connects",
  "_links": {
    "pages": { "href": "https://api/pages" },
    "progress": { "history": { "pages": "https://api./history" } }
  }
}
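Worth checking (an observation, not a confirmed diagnosis): in the pasted event the problem fields have unquoted string values and missing commas (e.g. "initiator": test_abce, and "ServiceMD": app with no trailing comma), which makes the event invalid JSON, and both automatic KV extraction and spath typically stop at the first syntax error. If the producer can be fixed to emit valid JSON, such as:

```
"initiator": "test_abce",
"releasenumber": "0.0.11",
"source": "test420",
"deployenv": "testppt",
"ServiceMD": "app",
"ServiceManager": "app1",
"ServiceLead": "app2",
```

then the fields should extract normally, e.g. ... | spath | table releasenumber source deployenv ServiceMD ServiceManager.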