All Topics


Hi Luke,  I know the question may be self-answered, but I thought I'd post it here anyway. The Lookup File Editor app version 3.4.2 seems to be available for Splunk Cloud and actually seems to work for some users, but we found it unresponsive for others, where users cannot save or create new lookups, etc. Before, we had a 3.3.x version of it and it worked flawlessly.  Is it that 3.4.2 or newer is still not working properly for Splunk Cloud?
Hi Splunkers, I've looked through the docs section but found very little info on using the DB Connect app to get IBM TSM/Spectrum Protect (DB2) database info into Splunk. In the lab environment with full access it works, but for production we only get limited access to the database, which should be enough to query some tables. But when trying to define a new input, the GUI throws a 'cannot get schemas' error. I know I need the 'TSMDB1' schema, so I don't need the dropdown box to be populated with other system schemas. I thought about manually editing the conf files, but that's not how our regular procedures work, there's some risk I might miss a check or warning, and it's an extra pain to maintain. We have DB Connect 3.2.0 with driver version 4.23; the TSM server version is 8.1.7. The dbx error in the Splunk log is: SQLCode 551, SQLState 42501, Execute SYSIBM.SQLTABLES, Unable to get schemas metadata
I am trying to create a PDF and CDF with a limited number of buckets:

... | bucket bins=10 fs as fsBinned
| stats count(fs) as pdf by fsBinned
| sort fsBinned
| accum pdf as cdf
| eventstats max(cdf) as totalMax
| eval fsPDF = 10*pdf/totalMax
| eval fsCDF = cdf/totalMax
| table fsBinned, fsPDF, fsCDF

However, the fsBinned variable does not change if I decrease the number of bins. The table shows that the resulting buckets are mixed together with the input fs variable, like this: 0, 0-100, 14, 15... Is this normal behavior or is this a bug? I am using Splunk 7.2.10.
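Independent of Splunk, the PDF/CDF the search above is aiming for can be sketched in plain Python (hypothetical sample values) to see what correctly binned output should look like: every value falls into exactly one of 10 equal-width buckets, the PDF sums to 1, and the CDF ends at 1.

```python
# Sketch of a PDF/CDF over 10 equal-width buckets (hypothetical data,
# not Splunk's implementation of `bucket`).
def pdf_cdf(values, bins=10):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1  # avoid zero width when all values are equal
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)  # clamp the max value into the last bin
        counts[i] += 1
    total = len(values)
    pdf = [c / total for c in counts]
    cdf, running = [], 0.0
    for p in pdf:
        running += p
        cdf.append(running)
    return pdf, cdf

pdf, cdf = pdf_cdf([1, 2, 2, 3, 50, 60, 99, 100])
```

A mix of single values and ranges in the same output column, as described above, would not come from a scheme like this; it suggests fs arrives with mixed types (some values already strings), so the bucketing silently passes them through.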
Hi, we have an Android setup where we distinguish between production and test builds through the use of Android flavors. We also have two separate accounts at AppDynamics for production and test. How can I provide a different set of credentials when everything seems to be done at compile time in a single build.gradle config setting like below?

adeum {
  account {
    name 'ourAccount'
    licenseKey '11111111-1111-1111-111111111111'
  }
}

Is it possible to set the credentials at runtime instead? Or parameterize this somehow depending on the flavor? Something like:

adeum {
  account("test") {
    name 'ourTestAccount'
    licenseKey '11111111-1111-1111-111111111111'
  }
}
adeum {
  account("prod") {
    name 'ourProdAccount'
    licenseKey '22222222-2222-2222-222222222222'
  }
}
Hello Splunk Community, we have two types of logs being forwarded to Splunk: a simple .log file and JSON logs. I am only interested in one of the objects, which has key-value pairs. In this example I am only interested in the log object.

JSON LOG

{
  kubernetes: { ... }
  log: 2020-06-24T13:23:12.8735410Z CI=4fomit248-2e46-4omit9-8019-838cdac1a4a4 L=INFO This is some log message here HRM=GET HRU=http://00.00.000.00:80/bar/v1/foo IP=::ffff:00.00.000.000 AV=? HSC=200 ET=1
  stream: stdout
  time: 2020-06-24T13:23:12.873853339Z
}

In the log object I want the fields to be extracted as follows:

Key | Value
CI | 4fomit248-2e46-4omit9-8019-838cdac1a4a4
L | INFO
IP | ::ffff:00.00.000.000
<THIS WOULD BE THE LOG MESSAGE, NOT A KEY> | This is some log message here

I understand how to parse fields from the spath output using regex. However, I would prefer this is parsed at index time. Our other set of logs are exactly what is in the log object; they are not in JSON format and Splunk picks up the fields just fine.

Log from .log file

2020-06-24 06:41:31.195 ST=C5D17Domitted72738B0D136DA9 CI=b1d0b050-omitted-46d2-omitted-80a61dfadf7d L=INFO Some log message here HRM=GET SN=FOO MN=Get HRU=http://foo.omit/bar/v2/foobar IP=00.00.000.000 ET=31 HSC=200 FOWCF=4

Is it possible to extract the log object at index time and turn it into its own log where the key-value pairs are extracted as fields? I also read this blog post; is this the best approach? Eureka! Extracting key-value pairs from JSON fields
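As a sanity check independent of Splunk, the KEY=VALUE pairs in the log text can be matched with a single regex. A Python sketch over a shortened, hypothetical copy of the sample line; the free-text message never takes the KEY= shape, so it is naturally left out of the extracted pairs:

```python
import re

# Shortened, hypothetical version of the sample `log` value above.
line = ("2020-06-24T13:23:12.8735410Z CI=4fomit248-2e46-4omit9-8019-838cdac1a4a4 "
        "L=INFO This is some log message here HRM=GET "
        "HRU=http://00.00.000.00:80/bar/v1/foo IP=::ffff:00.00.000.000 HSC=200 ET=1")

# KEY=VALUE where KEY is one or more capital letters and the value runs
# to the next whitespace. The free-text message contains no "KEY=" token,
# so it does not end up in the extracted pairs.
pairs = dict(re.findall(r"\b([A-Z]+)=(\S+)", line))
```

The same pattern would be the starting point for a transforms.conf REGEX if you choose index-time extraction, though search-time KV extraction is usually preferred where it suffices.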
I am deploying a new Splunk environment using the splunk-ansible git repository (https://github.com/splunk/splunk-ansible). I have installed a search head cluster consisting of 3 servers and an indexer cluster, also 3 servers, plus one master; it's all been deployed with Ansible and is working fine. Now I need to connect the search head cluster to the indexer cluster and can't seem to find any option in splunk-ansible for how to do it. I have done it manually using the command:

./splunk edit cluster-config -mode searchhead -master_uri https://10.10.10.10:8089 -secret splunk1234

How can I achieve this using splunk-ansible?
Hi All, I'm banging my head against a wall trying to figure out why a SEDCMD inside a props.conf on a UF isn't stripping out the value I tell it to. We want to strip a hashed value from a log that sits inside brackets (example below), along with the brackets themselves, using the SEDCMD. I am able to successfully test this command on the search head, but when I place it inside the props.conf on the UF, I don't see it applied. I'm sure I'm missing something pretty simple. I've tried quite a few variations of this with no luck. Could anyone help me, or possibly give me a hint as to what I could be doing wrong? Thank you all.

| rex mode=sed field=_raw "s/\[ecid: .+?\]//g"

[log4j]
SEDCMD-random=s/\[ecid: .+?\]//g

Sourcetype: log4j
[2020-06-24T10:02:08.590-04:00] [Server] [NOTIFICATION] [] [] [tid: 394025] [userId: <anonymous>] [ecid: 3956b675-4930-42d5-9e7d-94ca9013d2ea-0037ac42,0:26:74:38:2010:52:52:71:38] [APP: oraclediagent2] [partition-name: DOMAIN] [tenant-name: GLOBAL] [oracle.odi.runtime.MrepExtId: 38392028449]
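One thing worth noting: SEDCMD in props.conf is applied at parse time, which normally happens on an indexer or heavy forwarder rather than on a universal forwarder, so placing it on the UF may simply have no effect for a regular sourcetype. The expression itself can be checked outside Splunk; a small Python sketch over a shortened, hypothetical line in the same shape as the log4j sample:

```python
import re

# Hypothetical line shaped like the log4j sample above (shortened ecid value).
raw = "[2020-06-24T10:02:08.590-04:00] [Server] [ecid: abc-123,0:26:74] [APP: oraclediagent2]"

# Same expression as SEDCMD-random, written as a Python substitution:
# non-greedy `.+?` stops at the first closing bracket after "ecid: ",
# so only the ecid block (brackets included) is removed.
cleaned = re.sub(r"\[ecid: .+?\]", "", raw)
```

Since the substitution behaves as expected here, the likely culprit is where the props.conf lives, not the regex.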
Hello Experts, I am new to Splunk and trying to build basic queries for use cases. Currently I am working on a use case where I want to list the users for whom a successful attempt is observed after multiple unsuccessful attempts. For example, after 5 unsuccessful attempts (event code 4625), one successful attempt (event code 4624) is observed. Any suggestion on how to achieve this use case will be appreciated. Thanks in advance. Parth
Hello Splunk Experts! I have a string like below:

rex " - - (?<text>foo|bar) " | .....

I want to extract the text when a word matches foo or bar. The string includes whitespace as above. Thanks in advance!
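The capture group behaves as intended; here it is checked outside Splunk with the equivalent Python pattern over a hypothetical input string (the spaces around the group are part of the match, as in the original):

```python
import re

# Equivalent of: rex " - - (?<text>foo|bar) "  (Python spells named groups (?P<text>...))
pattern = re.compile(r" - - (?P<text>foo|bar) ")

# Hypothetical access-log-style line containing " - - bar ".
m = pattern.search("10.0.0.1 - - bar GET /index.html")
extracted = m.group("text") if m else None
```

Because the leading and trailing spaces are inside the pattern, a word other than foo or bar between the dashes produces no match at all.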
Hey guys, am I guaranteed to get Splunk software updates if I have a perpetual license (bought several years ago)? I don't quite follow, but I heard there were some changes in Splunk's business model or something. Thanks in advance, Rashid
Hello, We are trying to send log data from PCF applications over to our Splunk indexers via TCP. This was configured and working fine on our old servers, but our new servers are throwing messages like:

06-24-2020 08:22:12.449 -0400 ERROR TcpInputProc - Message rejected. Received unexpected message of size=875640864 bytes from src=10.198.52.102:46154 in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload.

We tried setting it up using the steps outlined here: https://docs.cloudfoundry.org/devguide/services/integratsplue-splunk.html#config by copying over the rfc5424 installation from our old servers to the new servers. The link to the Splunk documentation does not appear to work anymore, but I was not able to find additional information about how to integrate w/ PCF with a quick search in Splunk. We have the ports configured in our inputs.conf file like this:

[tcp://6001]
connection_host = dns
index = test
sourcetype = rfc5424_syslog

But we continue to see these error messages. Is there some additional configuration we should be checking to make sure it is configured to accept this TCP data correctly? Thanks in advance for any assistance.
I have been trying to put together a dashboard for my clients that shows their usage.  The search I am still having issues with is for average daily ingest by index over 30 days.  I want it to show just the average number.  I have found searches that get me part of the way, but none are specifically what I need.
Hi there, how can I stop all-time searches from running in the environment, whether from users or dashboards? Running 7.3.3, so I can't take advantage of WLM monitoring yet. Can this be done with search restrictions and roles, in the sense that the search just won't run? Thanks!
Hello, is there any way I can check the sequence of logs? For example, I have the following list of messages:

1. "Message arrived!" at 2020-06-22 17:19:28.032
2. "Message Received" at 2020-06-22 17:19:28.047
3. "Start import" at 2020-06-22 17:19:28.047
4. "Finished with number of records 123" at 2020-06-22 17:19:28.215

I want to create a chart for these different messages and also the number of records. Thanks,
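For reference, once the timestamps are parsed, verifying the order of such messages and computing the gaps between them is straightforward; a Python sketch over a hypothetical reconstruction of the four sample messages:

```python
from datetime import datetime

# Hypothetical reconstruction of the four sample messages with their timestamps.
events = [
    ("Message arrived!", "2020-06-22 17:19:28.032"),
    ("Message Received", "2020-06-22 17:19:28.047"),
    ("Start import", "2020-06-22 17:19:28.047"),
    ("Finished with number of records 123", "2020-06-22 17:19:28.215"),
]

times = [datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f") for _, ts in events]

# The sequence is valid if timestamps never go backwards.
in_order = all(a <= b for a, b in zip(times, times[1:]))

# Total elapsed time from first to last message, in milliseconds.
total_ms = (times[-1] - times[0]).total_seconds() * 1000
```

In Splunk terms the same per-sequence duration would typically come from grouping the messages (e.g. with transaction or stats range(_time)) before charting.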
Hello, I'm working on SAI (Splunk App for Infrastructure)!

* Based on a single instance (Windows OS)
* I configured the port (9997) first, then installed SAI with the add-on
* Launched the script mentioned in the steps after selecting the metrics and logs that I need

==> Splunk UF is installed, inputs.conf is configured
==> but the web interface does not show the entity!!

When I check the events in Splunk, I can find the logs that I selected when I set up "ADD DATA" and launched the script [Security, Application, ...]. Please, I need your help because I've been blocked for a long time!! I've followed a lot of tutorials but still cannot resolve the issue!! THANKS IN ADVANCE
When using syslog over TCP (instead of UDP) from Imperva, many messages are lost since they do not arrive in order, so at the destination everything is scrambled because they cannot be reassembled.
I'm trying to use the CIM to look at some firewall data in the InfoSec app. I've set up the event tagging and field aliases but I'm getting different results when searching using the datamodel in the InfoSec dashboard vs using a base search. Here are the two searches I'm using; anyone know why they are providing different results?

Datamodel:
| tstats prestats=true local=false summariesonly=true allow_old_summaries=true count from datamodel=Network_Traffic.All_Traffic where * by _time, All_Traffic.action span=10m
| timechart minspan=10m count by All_Traffic.action

Base:
tag=communicate tag=network
| stats count by _time,action
| timechart minspan=10m count by action

Thanks
How do I use AppDynamics with ServiceNow? I am not able to find the requirements for the AppDynamics-ServiceNow Data Sync. How can I get the downloads?
Hi Splunkers, I'm having some issues creating a time chart from the report below. The pie chart looks great, but for some reason I can't get a timechart working:

| eval start_time_epoch = strptime(reported_time,"%b %d %Y %H:%M:%S")
| eval start_time_second = strftime(start_time_epoch,"%S")
| eval start_time_epoch_rounded = start_time_epoch - start_time_second - 5*3600
| fields - start_time_epoch, start_time_second
| eval close_time_epoch = strptime(processed_time,"%b %d %Y %H:%M:%S")
| eval close_time_second = strftime(close_time_epoch,"%S")
| eval close_time_epoch_rounded = close_time_epoch - close_time_second - 5*3600
| fields - close_time_epoch, close_time_second
| eval minute = mvrange(0, (close_time_epoch_rounded - start_time_epoch_rounded), 60)
| mvexpand minute
| eval _time = start_time_epoch_rounded + minute
| eval myHour = strftime(_time,"%H")
| eval myMinute = strftime(_time,"%M")
| eval myDay = strftime(_time,"%A")
| where myDay != "Saturday" AND myDay != "Sunday" AND (myHour >= 8 AND myHour < 20)
| eval ReportedTime = strftime(start_time_epoch_rounded,"%b %d %Y %H:%M:%S")
| eval ProcessedTime = strftime(close_time_epoch_rounded,"%b %d %Y %H:%M:%S")
| stats count as durationInMinutes by ticket,ReportedTime,ProcessedTime
| eval duration = tostring(durationInMinutes*60, "duration")
| eval SLO=if(durationInMinutes>60,"SLO Fail","SLO Achieved")
| table ticket,ReportedTime,ProcessedTime,duration,SLO
| chart count by SLO

This is what the pie chart looks like:
I'd like to get a timechart showing a line for the missed SLOs against the total. If no SLOs are missed, the line should be at 100%.
This is what I tried, and it's not working:

| eval start_time_epoch = strptime(reported_time,"%b %d %Y %H:%M:%S")
| eval start_time_second = strftime(start_time_epoch,"%S")
| eval start_time_epoch_rounded = start_time_epoch - start_time_second - 5*3600
| fields - start_time_epoch, start_time_second
| eval close_time_epoch = strptime(processed_time,"%b %d %Y %H:%M:%S")
| eval close_time_second = strftime(close_time_epoch,"%S")
| eval close_time_epoch_rounded = close_time_epoch - close_time_second - 5*3600
| fields - close_time_epoch, close_time_second
| eval minute = mvrange(0, (close_time_epoch_rounded - start_time_epoch_rounded), 60)
| mvexpand minute
| eval _time = start_time_epoch_rounded + minute
| eval myHour = strftime(_time,"%H")
| eval myMinute = strftime(_time,"%M")
| eval myDay = strftime(_time,"%A")
| where myDay != "Saturday" AND myDay != "Sunday" AND (myHour >= 8 AND myHour < 20)
| eval ReportedTime = strftime(start_time_epoch_rounded,"%b %d %Y %H:%M:%S")
| eval ProcessedTime = strftime(close_time_epoch_rounded,"%b %d %Y %H:%M:%S")
| stats count as durationInMinutes by ticket,ReportedTime,ProcessedTime
| eval duration = tostring(durationInMinutes*60, "duration")
| eval SLO=if(durationInMinutes>60,"SLO Fail","SLO Achieved")
| table ticket,ReportedTime,ProcessedTime,duration,SLO
| timechart count(SLO), count(SLO Fail) as Missed
| eval Percent=round(Missed*100/SLO,2)

Any ideas what's missing? Thank you!
Hello, I was looking for a log that indicates when a Windows Server needs a reboot. Previously there was a Windows Update log that I could use to get this data, but with Windows 10/2016 you have to run a PowerShell script to get the same info. Perhaps there is something in the Event Viewer logs I could be using; I'm hoping there is something I have overlooked. I'm also using Tines for automation, and the best that I've come up with so far is to run a PowerShell script and import that into Tines. I would prefer to take my feed from Splunk, but let me know if you can think of any way around this. Regards, David