All Topics

Hi all, is there a way to pass the filename of a CSV to a report as a variable, to use it as a lookup file? Example: | savedsearch my_report file=my_file.csv, where my_report = | some search | lookup $file$
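For what it's worth, the savedsearch command does substitute key=value arguments into $key$ placeholders inside the saved search, so a sketch along these lines may work (the report body and lookup file name here are assumptions, not confirmed from the question):

```
Saved search "my_report" defined as:
    index=main | lookup $file$ host OUTPUT owner

Invoked with the filename passed in as a variable:
    | savedsearch my_report file=my_file.csv
```

If the substitution does not behave as expected in your version, a search macro with an argument is another common workaround.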
Morning everyone, I came in to work today and am seeing this error. Anyone familiar with it? What's the impact and the fix? Stock install of Splunk with no custom certs.

03-16-2020 03:01:55.285 -0700 ERROR ExecProcessor - message from "/opt/splunk/bin/python2.7 /opt/splunk/etc/apps/splunk_instrumentation/bin/instrumentation.py" HTTPSConnectionPool(host='e1345286.api.splkmobile.com', port=443): Max retries exceeded with url: /1.0/e1345286/81416994-c2ef-5c6f-a3de-68fb09953b0d/100/0?hash=none (Caused by SSLError(SSLError(1, u'[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:742)'),))
So I have a separate folder that was prebuilt from the Splunk universal forwarder. The folder path is /opt/splunkforwarder/etc/apps/"MY folders HERE". One of the folders under /apps IS sending; the other folder is not, and all it has is a path of /apps/NOT SENDING FOLDER/local/inputs.conf. Inside inputs.conf I have:

[monitor:///var/log/router/*.log]
host_regex = router/(.*)\.log
sourcetype = cisco
index = net
crcSalt = <SOURCE>
disabled = 0

This is not monitoring the folder and NO logs are going into Splunk. However, in the correct folder that is sending, I have:

[monitor:///var/log/security.log]
sourcetype = seclog
index = sec
disabled = 0

I also have the following folders in the working app that I do not have in the non-working one: default, local, metadata, README.md, static. I was wondering if anyone can point me in the right direction to help me figure out why one folder is sending but the other isn't.
Hi, I want to embed the output of the latest Jenkins version (https://updates.jenkins.io/stable-2.204/latestCore.txt) in a Splunk dashboard and change the font size. I can change the background color, but I couldn't find a way of changing the text size. Any ideas?

<dashboard>
  <row>
    <panel>
      <html>
        <iframe style="background: #FF0000" src="https://updates.jenkins.io/stable-2.204/latestCore.txt"></iframe>
      </html>
    </panel>
  </row>
</dashboard>
Hello, I'm trying to be a good citizen of Splunk Answers. By the nature of my job (I'm sure this is the same for most of us), I get pulled in different directions. I post a Splunk question one day, and then I'm asked at work to shift gears. I return to the question(s) a few days or a few months later. Sometimes I've answered my own question; at other times, research on a different Splunk topic answered it. I hope I am not rambling. I want to close out questions that I still have open (if that is possible). I want to redirect open questions to other open questions, consolidating them around the same topic. For instance, I have questions on Linux audit logs and on dashboard tiles. I don't want to delete unanswered questions. And though I don't have a lot of karma, I want to reward those who took the time to answer and help out; even if their comment wasn't the correct answer, it made me think and dig deeper. Thanks in advance and God bless, Genesius
Please, how can I integrate Microsoft SOC as a Service with Splunk? And what are the business benefits?
We are using Pulse Secure as our VPN solution, and I'm looking to build a search that tracks concurrent users per hour. Using my account as a test, I see the first event starts with "Primary authentication successful for*" and ends with "Closed connection*", so based on that I created the following search:

index=foo
| transaction startswith="Primary authentication successful for*" endswith="Closed connection*"
| eval count=1
| timechart per_hour(eval(count)) as "Concurrent Users"

Is this a valid search for concurrent users? Thx
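As an alternative sketch, SPL's built-in concurrency command counts overlapping sessions directly from each transaction's start time and duration, whereas per_hour() measures an hourly event rate rather than true concurrency. The user field used to group transactions below is an assumption:

```
index=foo ("Primary authentication successful for" OR "Closed connection")
| transaction user startswith="Primary authentication successful for" endswith="Closed connection"
| concurrency duration=duration
| timechart span=1h max(concurrency) AS "Concurrent Users"
```

transaction emits a duration field automatically, which concurrency consumes together with _time.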
Is there a setting that can be added to indexes.conf on the indexers to change which drive the SmartStore cache manager resides on? The Splunk environment is currently on one drive, and we need the cache manager to reside on another. Any guidance is greatly appreciated!
In a typical Splunk Cloud environment, do logs get forwarded from on-prem directly to the cloud indexers, or is it best practice to have some type of collector on-prem, such as a heavy forwarder, which collects the data and forwards it to the cloud indexers?
I'm able to push my syslog info from my Asus router (RT-AC88U) to a Splunk server running Ubuntu 18.04 on my network. I receive data on UDP port 514, and I'm able to see it in Splunk. On my Splunk server I have installed the Home Monitor app, and I'm able to see the dashboard for blocked traffic; looks nice. However, I do not see bandwidth dashboard data. Is this data also available in the syslog the Asus router is sending? I strongly have the feeling that this is not the case. I have enabled firewall logging, and the syslog level in the router is set to info. In the router itself I can see traffic measurements: I can select daily/weekly, by client, totals, etc., even real-time bandwidth consumption. I hoped this data would be available in the Home Monitor app. I did a lot of searching on the internet but so far have not found a possible solution. I hope that somebody here can explain why I do not see the bandwidth monitor dashboard.
Hi, I have two types of messages, and I would like to extract the numbers from these logs:

2020-03-16 15:12:15,304 services/text123456: Periodic connection check - 1659 active services!
2020-03-16 15:11:56,173 services/textabcdef: NUMBER OF ACTIVE services: 1123

Thanks for the help
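Assuming the goal is a single numeric field regardless of which message shape matched, two rex calls can cover both formats; the field name active_services is an assumption for illustration:

```
... | rex "Periodic connection check - (?<active_services>\d+) active services"
    | rex "NUMBER OF ACTIVE services:\s+(?<active_services>\d+)"
```

Each rex only sets the field when its pattern matches, so every event ends up with at most one value from whichever format it carried.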
I was wondering if anyone else has seen this. I had the Pulse Secure admin send some logs to my syslog-ng server. An example of the log is below:

Mar 16 08:45:49 10.51.56.4 1 2020-03-16T13:45:49Z 192.168.2.1 PulseSecure: - - - 2020-03-16 13:45:49 - OmmitedName - [127.0.0.1] System()[] - ..Ommmted...

My logs are coming in with "PulseSecure: - - - 2020-03-16 13:45:49", which doesn't match any of the sample logs inside the TA. However, that appears to be expected, since the TA is looking for TIME_PREFIX = PulseSecure:\s-\s-\s-\s. Something is still not correct, as evidenced by these extractions not working properly:

EXTRACT-priority = ^\d+\s\<(?<priority>\d+)
EXTRACT-header = ^(?P<header>\d+)

Obviously I could recreate these extractions, but I'm still trying to figure out what is happening incorrectly.
Please, is there any checklist or guideline for troubleshooting, or for running a maintenance check, on an enterprise Splunk environment?
I have the below query:

index=f5 partition="/Common/-"
| rex "Username\s+'(?<Username>.*)'"
| eval Username=coalesce(Username, user)

The username is there, but on the first attempt the user left it empty, and on the second try he added his username. So the Username field is showing null values, whereas the default user field is showing the actual username. I am using coalesce because I want to take either value, but it should not be null. How can I achieve this?
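One possible cause, offered as a sketch: coalesce treats an empty string as a valid value, so an extraction that matches '' will mask the user field. An explicit empty check avoids that (the regex capture name follows the question; the character class is an assumption):

```
index=f5 partition="/Common/-"
| rex "Username\s+'(?<Username>[^']*)'"
| eval Username=if(isnull(Username) OR Username=="", user, Username)
```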
Hello, could you please let me know if this add-on works with Bitbucket Cloud as well, or just with Bitbucket Server? Regards,
Does anyone have ideas about this problem? I am not sure in which situation Splunk will create a disabled_rb bucket in /colddb. Is there any documentation about these kinds of buckets?
I have a long-established forwarding situation where a network device writes its log files over a network channel to a Linux host that acts as a log collection point for 4 network devices. We cannot install Splunk on the network device. The sourcetype is a custom application type. Illustration:

network device 1
network device n --> linux log host (1 or 2)
network device 4

Today we set the host to the network device statically, using host=xyz in inputs.conf based on the monitored filename. My question is: can I continue to do this and also add a new field, called e.g. log_host, which is set to the Linux log host's name? I would like to maintain the current host field value as it is now, to prevent disruption to existing reports etc., hence adding this new field. Is this possible?
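For reference, an extra indexed field can be stamped onto events at parse time with a props/transforms pair while host stays exactly as it is today. A minimal sketch, assuming a sourcetype of my_custom_sourcetype and a log-host name of linuxloghost1 (both placeholders):

```
# props.conf (on the parsing tier)
[my_custom_sourcetype]
TRANSFORMS-add_log_host = add_log_host

# transforms.conf
[add_log_host]
REGEX = .
FORMAT = log_host::linuxloghost1
WRITE_META = true

# fields.conf (on the search heads, so the field is searchable)
[log_host]
INDEXED = true
```

Because REGEX matches every event, each event from that sourcetype gets log_host added without touching the existing host assignment.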
I have a JSON file with some information regarding SOA requests: basically info such as callee, caller, and the start and end timestamps of a request (write me if you want more details). I have a Splunk query to group all the events by callee and calculate the average duration as the difference between the end and start timestamps. Something like this:

| tstats values where index=my_index by callee, timestampStart, timestampEnd
| eval duration=round((strptime(timestampEnd, "%Y-%m-%dT%H:%M:%S.%6N%Z") - strptime(timestampStart, "%Y-%m-%dT%H:%M:%S.%6N%Z")), 2)
| stats avg(duration) as duration by callee
| eval duration=round(duration, 2)
| table callee, duration

In addition to the average duration, I would also like to add a column with the count of all events for that callee, but (if I understand correctly) I can do this only with a tstats count. Any ideas? Thanks a lot
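One possible sketch: switching values to count in the tstats call yields a per-row event count that can then be summed per callee alongside the average. Field names are taken from the question; the overall approach is an assumption about the data layout:

```
| tstats count where index=my_index by callee, timestampStart, timestampEnd
| eval duration=round(strptime(timestampEnd, "%Y-%m-%dT%H:%M:%S.%6N%Z") - strptime(timestampStart, "%Y-%m-%dT%H:%M:%S.%6N%Z"), 2)
| stats avg(duration) AS duration, sum(count) AS events BY callee
| eval duration=round(duration, 2)
| table callee, duration, events
```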
Using the following search, I'm seeing AWS CloudTrail ingest lag of between 4 and 9 hours.

index=ibp_aws sourcetype=aws:cloudtrail*
| eval lag=round((_indextime - _time)/60,1)
| bin _time span=10m
| stats max(lag) AS xLagH min(lag) AS nLagH count by _indextime
| eval _time=_indextime
| timechart span=10m max(xLagH) min(nLagH) sum(count)

If the search is correct, any idea why AWS CloudTrail ingest would lag like this? I'm on Splunk Enterprise 7.0.1 and Splunk_TA_aws 4.4.0.
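To narrow down where the delay originates, a variant of the same lag calculation split by source may show whether only some inputs or buckets lag (offered as a diagnostic sketch, not a fix):

```
index=ibp_aws sourcetype=aws:cloudtrail*
| eval lag_min=round((_indextime - _time)/60, 1)
| stats max(lag_min) AS max_lag_min, min(lag_min) AS min_lag_min, count BY source
| sort - max_lag_min
```

If every source lags uniformly, the bottleneck is more likely on the collection side (e.g. the TA's polling interval or SQS/S3 delivery) than in a particular input.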
Team, I am looking for a way to generate a summary report on the cases that we have in Phantom, which would include case ID, case name, assignee, start date, end date, status, etc. I don't find an option to generate such reports in the Phantom GUI currently. Please help me if anybody has found a solution for this. I am looking for ways to pull the data with the help of the REST API. Please post if anyone has worked on this so far. Thank you, MK