All Topics



We installed the Splunk App for Jenkins. All dashboards work fine except one: "admin -> Jenkins Nodes" (please see the sample attached). The first panel in this dashboard only works for the admin user; a non-admin user who has permission to this app sees a blank table with a round icon. I already checked the permission settings in this app but didn't find any mistake. I'd appreciate any advice. Thank you!
Hi Splunk Community! I am completely new to Splunk and its configuration, so please excuse any lack of knowledge. I am trying to connect a new custom app to our Splunk Cloud instance; however, it is not showing up in the GUI. The end goal is to have data coming in from a universal forwarder on a Windows server to a CentOS heavy forwarder, which then passes the data up to Splunk Cloud. A previous employee successfully configured other custom apps, so I'm trying to follow that config.

On the heavy forwarder, I have a /opt/splunk/etc/apps/<custom app>/ directory with appserver, default, local, logs and metadata folders, copied from a sample app on the system. In both the default and local folders, I have this inputs.conf:

######################
[splunktcp://8089]
connection_host = ip
######################

...since the Windows universal forwarder is configured to send to this heavy forwarder on port 8089. I then edited /opt/splunk/etc/system/local/serverclass.conf to add this stanza:

######################
[serverClass:All Servers:app:<custom app>]
restartSplunkWeb = 1
restartSplunkd = 1
stateOnClient = enabled
######################

...and issued "splunk reload deploy-server". However, I am not seeing the app turn up on Splunk Cloud. I would really appreciate a push in the right direction as to what I have not configured correctly. Thank you!
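For comparison, here is a minimal receiving-input sketch for a heavy forwarder. It is illustrative, not the poster's actual config: 9997 is the conventional Splunk-to-Splunk receiving port, while 8089 is normally reserved for splunkd's management interface, so a [splunktcp://8089] stanza is worth double-checking.

```ini
# <custom app>/local/inputs.conf -- illustrative sketch only
# 9997 is the conventional forwarder-receiving port; 8089 is usually
# splunkd's management port, so receiving on it would conflict.
[splunktcp://9997]
connection_host = ip
```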
Hello everyone, I have fields like below:

indicator      tags
indicator 1    tag 1,class:234
indicator 2    tagg,class:456

I have to group my fields based on the tags starting with "class", and my query is like below:

sourcetype="my-data" | stats count by tags | where tags="class*"

But I am getting 0 results, as where takes only exact values and does not treat "class*" as a wildcard. I want my result as below:

class:234    1
class:456    1

Kindly suggest.
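One hedged way to approach this, assuming tags is a single comma-separated string field: split it into multiple values, expand, and filter with like(), since where does not expand * as a wildcard but like() supports % patterns:

```spl
sourcetype="my-data"
| eval tag=split(tags, ",")
| mvexpand tag
| where like(tag, "class%")
| stats count by tag
```

This is a sketch; the field name and delimiter may need adjusting to the real data.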
Followed this guide properly, but I am not getting any Falcon Indicator events in Splunk, and I'm getting the following message in the log file:

2020-10-16 14:37:04,341 INFO pid=488 tid=MainThread file=splunk_rest_client.py:_request_handler:105 | Use HTTP connection pooling
2020-10-16 14:37:05,289 INFO pid=488 tid=MainThread file=base_modinput.py:log_info:295 | Authentication status code: 201
2020-10-16 14:37:05,289 INFO pid=488 tid=MainThread file=base_modinput.py:log_info:295 | Successfully Retrieved Authentication Token
2020-10-16 14:37:05,563 ERROR pid=488 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-crowdstrike-intel-indicators/bin/ta_crowdstrike_intel_indicators/aob_py2/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-crowdstrike-intel-indicators/bin/crowdstrike_intel_indicators.py", line 77, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-crowdstrike-intel-indicators/bin/input_module_crowdstrike_intel_indicators.py", line 157, in collect_events
    indicators = intel['resources']
KeyError: 'resources'

Please advise. Thanks.
I would like to generate a Splunk URL that has: 1) the query to render, 2) the visualization to render, and 3) some query annotations. The query and the visualization are fine; I'm not sure how to get the annotations into the query. For example: with the query I have, I can render a proper chart and visualization (someone can cut and paste the URL and re-render the same), but how about adding an annotation to the chart (I put red lines at a particular point in the timeline) that I want the user to focus on? Is it possible to say: please draw a line at this epoch time, and another line at that epoch time?
I want to create a setup where Splunk monitors browsing from the Firefox browser on an Ubuntu machine. If a user browses a blacklisted website, a real-time alert is created and the admin is notified. Breaking the problem into 2 separate issues:

1) How do I get Splunk to monitor the Firefox browser on Ubuntu?
2) How do I create an alarm that goes to the admin (email, app, etc.)?

Thank you!
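On the first point, Firefox does not write a plain-text browsing log, so a common pattern is to route traffic through a local proxy and monitor its log. A sketch under that assumption, with hypothetical paths and index name:

```ini
# inputs.conf on the Ubuntu machine -- assumes a Squid proxy is in place
[monitor:///var/log/squid/access.log]
index = proxy
sourcetype = squid:access
```

For the second point, a saved search over that data can be given an email alert action under Settings -> Searches, reports, and alerts.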
How do I resolve the following error?

Error in 'SearchParser': The search specifies a macro 'aws-cloudtrail-sourcetype' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.
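For context, such a macro is normally defined in macros.conf of the app that owns it; if that app's knowledge objects are not shared globally, other apps cannot resolve the macro. An illustrative (not necessarily exact) definition:

```ini
# macros.conf in the owning app -- definition shown is illustrative
[aws-cloudtrail-sourcetype]
definition = sourcetype=aws:cloudtrail
```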
Hello, my Dev license expired while I was on PTO. I just received my new one and applied it on a newly migrated node, but I still get this error: "Correct by midnight to avoid violation." Last time I had this issue I had to request a reset license. If I let the new license sit overnight, will it clear?
I have a list of malicious URLs that I have loaded into a lookup table called badurls.csv. I created a field in the table called domains. I want to compare that lookup table against an index, specifically against a field called Domain, to see if we have any traffic going to this list of malicious URLs. My .csv file has over 3 million entries. I tried the search below, but it's not giving me all results and it's complaining about a 10,000-row subsearch limit.

index="dns"
| eval d=substr(Domain, 1, len(Domain)-1)
| search * [| inputlookup badurls.csv | rename domains as d | fields + d ]
| stats count by d

Any ideas on a better way to do this?
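A hedged alternative that avoids the subsearch row limit entirely: match events against the CSV with the lookup command instead of a subsearch. This sketch assumes a lookup definition exists for badurls.csv:

```spl
index="dns"
| eval d=substr(Domain, 1, len(Domain)-1)
| lookup badurls.csv domains AS d OUTPUT domains AS matched
| where isnotnull(matched)
| stats count by d
```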
Hi, I am having trouble getting a deployment server and a deployment client to communicate, and then accessing the data through Splunk search, using SSL with Splunk default certificates. What steps would I have to go through to achieve this?

I am trying to get my deployment server (Server A), with default certs cacert.pem and server.pem in /etc/auth, to communicate with Server B, which also has the same default certs in /etc/auth. I have defined the deployment server's server.conf and inputs.conf as shown:

server.conf:
[sslConfig]
enableSplunkdSSL = false
useClientSSLCompression = true
serverCert = /xxxxx/splunk/etc/auth/server.pem
sslPassword = password
sslRootCAPath = /xxxx/splunk/etc/auth/cacert.pem
certCreateScript = genMyServerCert.sh

inputs.conf:
[SSL]
serverCert = /xxxx/splunk/etc/auth/server.pem
password = password
rootCA = /xxxx/splunk/etc/auth/cacert.pem
requireClientCert = false
sslVersions = tls,-ssl3

On Server B, the deployment client, my server.conf is defined as:
[sslConfig]
enableSplunkdSSL = true
[default]
useClientSSLCompression = true
serverCert = /xxxx/splunkforwarder/etc/auth/server.pem
sslPassword = password
sslRootCAPath = /xxxx/splunkforwarder/etc/auth/cacert.pem
certCreateScript = genMyServerCert.sh

What .conf files do I need to edit, and what stanzas will I need to define on the deployment client (Server B), for them to communicate, so that eventually I can search Server B's data from my search head? Sorry if this is unclear, but I will answer any questions about what I am asking. Thank you.
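For reference, the client side of the deployment-server relationship is usually established in deploymentclient.conf rather than server.conf; a minimal sketch with a hypothetical host name:

```ini
# deploymentclient.conf on Server B -- targetUri host is hypothetical
[deployment-client]

[target-broker:deploymentServer]
targetUri = deployment-server.example.com:8089
```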
Right now I have a large multisearch, each line specifying a different time range of days. We are gathering data on a daily, then weekly, timeframe for some baselines. That is where the eval of time comes in: we assign it as day1, day2, day3... so the data from each day carries a value in the table we can distinguish it by. I am not sure if there is a better way for our need, but I wanted to explore it for my own education; updating 10+ lines of the same thing is not ideal.

| multisearch
    [ index=someindex sourcetype=somesourcetype name=test earliest=-2d latest=-1d | eval time = "day1" ]
    [ index=someindex sourcetype=somesourcetype name=test earliest=-3d latest=-2d | eval time = "day2" ]

I was wondering if there is an easier way to define a value and then just loop through the search. I was thinking something like the following.

| eval valueToUseAsIterator=.....
index=someindex sourcetype=somesourcetype name=test earliest=-(valueToUseAsIterator)d latest=-(valueToUseAsIterator+1)d
| eval time=day(valueToUseAsIterator)

Edit: added more to the search and information.
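One hedged way to collapse the multisearch into a single search: run one search over the whole window and derive the day label from _time, so adding more days means widening the time range rather than adding lines. A sketch:

```spl
index=someindex sourcetype=somesourcetype name=test earliest=-10d@d latest=@d
| bin _time span=1d
| eval time = "day" . ceiling((now() - _time) / 86400)
| stats count by time
```

ceiling() here is just one way to turn a bucket's age into a day number; strftime(_time, "%F") would label buckets by date instead.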
Hi Splunk community, how do I count the number of "area" values between time ranges to show results like these?

Between 1/1/19 to 6/30/19, there are 2 areas
Between 7/1/19 to 12/31/19, there are 2
Between 1/1/20 to 6/30/20, there are 0
Between 7/1/20 to 12/31/20, there is 1
Between 1/1/21 to 12/31/21, there is 1
After 1/1/22, there are 2

The raw data looks like this:

Area     forecast_date
area 1   6/17/19
area 2   8/3/21
area 3   10/29/20
area 4   7/14/17
area 5   9/30/26
area 6   7/29/19
area 7   9/16/19
area 8   3/4/24
area 9   1/1/19
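A hedged sketch of one approach, assuming the events carry Area and forecast_date fields: parse the date with strptime and bucket it with case(). Note that ranges with zero matches simply won't appear in stats output; filling them in would take an extra step such as appendpipe or a lookup of all ranges.

```spl
| eval fd = strptime(forecast_date, "%m/%d/%y")
| eval range = case(
    fd < strptime("01/01/19", "%m/%d/%y"), "before 1/1/19",
    fd < strptime("07/01/19", "%m/%d/%y"), "1/1/19-6/30/19",
    fd < strptime("01/01/20", "%m/%d/%y"), "7/1/19-12/31/19",
    fd < strptime("07/01/20", "%m/%d/%y"), "1/1/20-6/30/20",
    fd < strptime("01/01/21", "%m/%d/%y"), "7/1/20-12/31/20",
    fd < strptime("01/01/22", "%m/%d/%y"), "1/1/21-12/31/21",
    true(), "after 1/1/22")
| stats count by range
```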
I have used predict before, and now I am seeing perc, which I haven't used as much. What is the biggest difference between these two? Is one favored over the other, or do they serve different purposes?
I have a requirement to calculate the time difference between multiple events based on jobId. The logs are like below. From these logs I need to fetch the timestamps for each jobId that has multiple events, calculate the difference between the timestamps, and assign it to the jobId, like: bw0a10db49 - (2 mins).

2020-10-14 12:41:40.468 INFO [Process Worker-9]Log - 2020-10-14T12:41:40.468-04:00 - INFO - jobId: bw0a10db49; Msg: Application testing.application started
2020-10-14 12:41:41.362 INFO [Process Worker-9]Log - 2020-10-14T12:41:40.468-04:00 - INFO - jobId: bw0a10db49; Msg: Application testing.application started
2020-10-14 12:41:42.480 INFO [Process Worker-6]Log - 2020-10-14T12:41:42.48-04:00 - INFO - jobId: bw0a10db49; Msg: EndOfFile Submited to ConcurentWebservice

Please suggest a query. Thanks in advance.
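A hedged sketch of the duration per jobId: extract the id with a rex (the pattern below is an assumption based on the sample lines), then take the range of _time across each job's events:

```spl
index=your_index "jobId"
| rex "jobId: (?<jobId>\w+);"
| stats range(_time) AS duration_sec, count BY jobId
| where count > 1
| eval duration = tostring(duration_sec, "duration")
```

index=your_index and the rex pattern are placeholders to adjust to the real data.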
Hi, when I go into the Splunk console -> Settings -> "All configurations", I see 2000+ entries for the Search and Reporting app. How do I pull all these rows using the REST API? I want to list all these knowledge objects per author (owner). I tried something like this, but it did not give all the results:

| rest "/servicesNS/-/search/saved/searches"
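One hedged option: the saved/searches endpoint covers only saved searches, whereas the directory endpoint (which appears to back the "All configurations" page; worth verifying) returns knowledge objects of all types, and count=0 lifts the rest command's default result cap:

```spl
| rest /servicesNS/-/search/directory count=0
| table title eai:type eai:acl.app eai:acl.owner
| sort eai:acl.owner
```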
I'm looking to create a chart that shows the pass/fail rate of an export process by code release dates rather than discretized time spans. Each release is about a month long, but they may vary by a few days and typically span two calendar months (i.e. Oct 14th to Nov 19th). My initial thought is to put the release dates into a .csv file with the following fields:

release,priorreleasedate,implementationdate
release1,01/15/2020,02/13/2020
release2,02/13/2020,03/14/2020

I have imported a similar .csv table with the lookup definition name "release_dates". I would then like to count the number of passes/fails that occurred during each release, so the end result would be a column chart with release number on the X-axis and count on the Y-axis. My initial search to get the events (or transactions) for pass/fail counts is as follows:

index="myindex" sourcetype="mysourcetype"
| transaction key startswith="Export Job #" endswith="Exported OR Error"
| eval result=if(eventtype="export_successful","Pass","Fail")

...but I am stuck as to how I would compare the _time of these events with the date ranges in the release_dates lookup table. Any direction would be much appreciated!
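One hedged approach: make release_dates a time-based lookup (in the lookup definition, set the time field to priorreleasedate with a maximum offset long enough to cover a release), so each event's _time is matched into the corresponding release window. A sketch, with the exact lookup invocation likely needing adjustment to the definition:

```spl
index="myindex" sourcetype="mysourcetype"
| transaction key startswith="Export Job #" endswith="Exported OR Error"
| eval result=if(eventtype="export_successful","Pass","Fail")
| lookup release_dates priorreleasedate AS _time OUTPUTNEW release
| chart count over release by result
```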
I am creating a dashboard that unfortunately badly needs a KV store lookup that lives on the ES search head. I know I can run REST calls across instances, but can I call the data in this KV store on ES from my (non-ES) search head and use it in a lookup? Or at least OUTPUTLOOKUP it? I would imagine there are issues with app permissions, etc. Any help is much appreciated. I can think of clunkier ways to do it, but this one seems too elegant to ignore.
What do the DataSizeMB settings below in indexes.conf mean? I just want to know how they work.

homePath = volume:hot1/fxmp_core_prod/db
coldPath = volume:cold1/fxmp_core_prod/colddb
thawedPath = $SPLUNK_DB/fxmp_core_prod/thaweddb
homePath.maxDataSizeMB = 2048 --> what does this imply?
maxTotalDataSizeMB = 5120 --> what does this imply?
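In short: homePath.maxDataSizeMB caps only the hot/warm portion of the index under homePath, while maxTotalDataSizeMB caps the whole index across hot, warm, and cold. The same stanza, annotated for illustration:

```ini
# indexes.conf (values from the question; comments are explanatory)
[fxmp_core_prod]
homePath   = volume:hot1/fxmp_core_prod/db
coldPath   = volume:cold1/fxmp_core_prod/colddb
thawedPath = $SPLUNK_DB/fxmp_core_prod/thaweddb

# Cap on hot+warm buckets under homePath: at ~2 GB the oldest
# warm buckets roll to coldPath.
homePath.maxDataSizeMB = 2048

# Cap on the entire index (hot + warm + cold): at ~5 GB the oldest
# cold buckets are frozen (deleted, or archived if a coldToFrozen
# script/directory is configured).
maxTotalDataSizeMB = 5120
```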
Hi Splunkers, I've been working on capacity planning, estimating indexer requirements, and I'm stuck calculating disk space. Our indexers are supposed to store only internal logs, and those internal logs come only from UFs. Since no Splunk setup is present on the prod servers currently, I can't measure how many internal log events per day a UF generates there.

What I did: I spun up a VM, installed a UF, and found that about 288k internal events per day, i.e. 200 events per minute, are generated. For the disk-space calculation I'm assuming 300 events/min, i.e. the measured 200 plus 100 events/min as a buffer.

Can anyone tell me, on average, how many internal events are generated by 1 UF on a prod server per day or per minute? I understand there may be many factors (number of add-ons, sources, and so on); I just need to confirm whether prod servers are around 300 events/minute. Is the estimate in range?

In simple terms, I need the answer that this query would give on a prod server:

| tstats count where index=_internal host="<any one host>" by _time span=60s
| stats avg(count)

PS: I don't have a Splunk setup to check this before deciding on disk space.
Splunkers, I am new to the community and learning the art of Splunk! I am searching raw data from a syslog server; the data that I am pulling usually looks like the sample below. I post most of the data in case it's needed, but most of the security-relevant data has been replaced by fictitious characters.

My focus is the "%ASA-6-106100" within the message log. I want to pull only the six digits in the string, "106100". So far I have developed a few regular expressions, but in the process they pull all the numbers, including the place where the "-6-" belongs, which makes my data messy. I want to tell Splunk to match only six-digit runs and nothing else.

index=syslog sourcetype=syslongisamazing "ASA"
| rex field=Event_type_code "^(?<Events_code>\. \d\d\d\d\d\d)"
| table Event_Code_type

This helps, but as mentioned it pulls even the middle code within my data. Thanks for your help, community.

2020-10-IST10:04:10.339 192.168.264.264|192.168.162.321| MFRTRSyslog0453 <234>Oct 15 2020 08:04:10 xxxx-xxxx0234: %ASA-6-106100: --> access-list xxxx-xxx-xxxx001_access_in permitted tcp xxx-x-xx-xxxxx  1.1.1.1(3454) hit-cnt 1 hit [oxbc660c9] [ox3a234t435a7f]
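A hedged sketch of one way to capture only the message ID: anchor the regex on the literal %ASA-<severity>- prefix so the single-digit severity stays out of the capture (the field name Event_code is illustrative):

```spl
index=syslog sourcetype=syslongisamazing "ASA"
| rex "%ASA-\d-(?<Event_code>\d{6}):"
| table Event_code
```

Anchoring on "%ASA-" and the trailing colon is what keeps the -6- from leaking into the extracted field.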