Hello guys, I'm pretty new to Splunk and I'd like to see if there is a way to create a query that dynamically populates the necessary table columns based on an initial search value passed in from a dropdown input. For example, let's say my data contains multiple entries based on protocol, and I wish to display the results in a table. If the protocol is SFTP, I only want columns pertaining to that protocol. I have about 5-10 unique protocols, with unique column requirements for each. I was attempting to build a search string to store the search I want based on a case statement, but that may not be possible. Something along these lines is what I want to achieve (the protocol will be passed dynamically from a dropdown input, and I understand how to pass that value):

search protocol = "SFTP"
| eval searchString = case(
    protocol == "SFTP", "remoteUserID=MyUserId, RemotePort=MyPort",
    protocol == "HTTPS", "externalURL=myURL, SSLCert=MyCert",
    1==1, "Not Found"
  )
| search searchString

I was also looking into directly modifying the XML, based on an article I found about displaying columns dynamically in Splunk (not enough karma points to post links), which would work if I could have a unique table list. Maybe there is another way where I could call different queries in my panel based on the dropdown value selected? Thanks!!
Hi, my Splunk search head usage seems to be spiking at specific intervals. The cause seems to be a lot of alerts/cron jobs that have been scheduled during these times. Is there a way for me to see the cron schedules/queries along with their CPU usage and execution time?
I have been banging my head against the wall for a while and would love some help. Imagine I have the following two event logs and would like to create a table from them. The logs have an array value, and I want the last item in that array along with its message value. Additionally, I want a top-level field from each event. So if I have the following two logs:

Event Log 1:
{
  "description": "My description",
  "param.response.tracking": [
    { "message": "My message" },
    { "message": "My other message" }
  ]
}

Event Log 2:
{
  "description": "My description 1",
  "param.response.tracking": [
    { "message": "My message 1" },
    { "message": "My other message 1" }
  ]
}

I want the resulting table:

description, message
"My description", "My other message"
"My description 1", "My other message 1"

I came across this question, which is very close to what I want: https://answers.splunk.com/answers/769708/how-to-access-a-property-on-the-last-element-in-an-1.html , but it doesn't work for me. This would be:

| spath output=result path=param.response.tracking{}
| eval res = mvindex(result, mvcount(result)-1)
| table description, res.message

Any help is appreciated.
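As a sanity check on the intended result (outside Splunk), the desired extraction — top-level description plus the message of the LAST tracking entry — can be sketched in Python. The helper name summarize is just for illustration:

```python
import json

def summarize(event_json):
    """Return (description, message of the LAST tracking entry)."""
    event = json.loads(event_json)
    # Note: the key contains literal dots, so it is a single key here,
    # not a nested path.
    tracking = event["param.response.tracking"]
    return event["description"], tracking[-1]["message"]

log1 = ('{"description": "My description", "param.response.tracking": '
        '[{"message": "My message"}, {"message": "My other message"}]}')
print(summarize(log1))  # ('My description', 'My other message')
```

In SPL terms, the `mvindex(result, mvcount(result)-1)` step above is the equivalent of `tracking[-1]`; the remaining difficulty in the question is extracting `.message` from that selected element.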
Installed version 3.1.0 of the Tenable App and I'm receiving the following error in the Splunk web service log when trying to configure it:

Masking the original 404 message: 'Trying to reach the "TA_tenable" app which does not have a User Interface' with 'Page not found!' for security reasons

Anyone else have this error and know how to resolve it?
Does anyone know how to do this? I'm having trouble with this conversion. Thanks in advance.
I have a customer that needs a dashboard showing data from a start date of Saturday through the current workday. The search that I have tried, with no results, is:

index=
| eval today=strftime(now(),"%a")
| eval Backdays=case(today=="Mon",2, today=="Tue",3, today=="Wed",4, today=="Thur",5, today=="Fri",6)
| search earliest=-"Backdays"d@d latest=now

There is definitely data for this past Saturday, Sunday, and today (Monday), and the "Backdays" field is populating with a 1 (in this case). Any help is most appreciated. Thank you.
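The days-back-to-Saturday arithmetic in the case statement can be sketched in Python. One likely trap, noted here as an observation rather than a confirmed fix: POSIX strftime("%a") yields "Thu", not "Thur", so the Thursday branch of a case statement keyed on "%a" output would never match. The function name below is illustrative:

```python
from datetime import date

def days_since_saturday(d):
    """Days back to the most recent Saturday (0 if d is itself a Saturday).
    date.weekday(): Monday=0 ... Saturday=5, Sunday=6."""
    return (d.weekday() - 5) % 7

# 2020-03-02 was a Monday, matching the post's "today (Monday)".
print(days_since_saturday(date(2020, 3, 2)))  # 2
```

This reproduces the same mapping as the case statement (Mon -> 2, Tue -> 3, ..., Fri -> 6) without enumerating weekday names.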
I have a table as below:

one two three four total five six

I want the "total" column to always be shown at the end, like below. Need help to do this:

one two three four five six total

Also note that the column names are dynamic and can change, but the "total" column name remains the same.
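The reordering being asked for can be sketched in Python: keep the dynamic columns in their original order and move "total" (if present) to the end. The function name is illustrative:

```python
def total_last(columns):
    """Reorder column names so 'total' (if present) comes last,
    preserving the relative order of all other columns."""
    rest = [c for c in columns if c != "total"]
    return rest + (["total"] if "total" in columns else [])

print(total_last(["one", "two", "three", "four", "total", "five", "six"]))
# ['one', 'two', 'three', 'four', 'five', 'six', 'total']
```

Because the other column names are unknown ahead of time, the logic filters "total" out rather than listing columns explicitly — the same idea an SPL answer would need to express.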
So I have numerous logs regarding users accessing an app to order food for delivery. Based on the session ID and user ID, I'm able to find the first and last timestamp of each session and calculate its duration. However, I also want to calculate the duration between when the user first accesses the app and the moment the user places an order. Basically, for each step the user takes in the app, there's a specific API. So the moment the user places an order, there's a field called route_path: API/place_order. I simply want to find the timestamp where the user placed the order using this route_path field and find the difference. Could anyone help? Appreciate it. The current query only finds the first and last timestamp for each session:

index="some jason file"
| stats earliest(_time) as first, latest(_time) as last, values(user_id) as user_id by session_id
| convert ctime(first) as First ctime(last) as Last
| eval duration=last-first
| eval difference=strftime(duration,"%m/%d-%Y %H:%M:%S")
| eval entire_session_duration=tostring(duration, "duration")
| eval entire_session_time = replace(entire_session_duration,"(?:()+)?0?(\d+):0?(\d+):0?(\d+)"," \2h \3m \4s")
| table user_id session_id First Last entire_session_duration entire_session_time
| search session_id!=""
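The time-to-order computation can be sketched in Python to pin down the intended semantics: within one session, take the gap between the earliest event and the first API/place_order event. The event tuples and function name are illustrative:

```python
def time_to_order(events):
    """events: list of (epoch_time, route_path) tuples for one session.
    Returns seconds from the session's first event to its first
    place-order call, or None if the session never placed an order."""
    if not events:
        return None
    first_event = min(t for t, _ in events)
    order_times = [t for t, path in events if path == "API/place_order"]
    if not order_times:
        return None
    return min(order_times) - first_event

session = [(100.0, "API/login"), (160.0, "API/browse"),
           (220.0, "API/place_order"), (300.0, "API/logout")]
print(time_to_order(session))  # 120.0
```

In SPL the analogous move would be a conditional aggregation per session, e.g. taking the earliest _time where route_path matches the order endpoint alongside the overall earliest _time.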
All, we're reselecting our endpoint protection for Windows servers and workstations. I'd like to start with solutions that speak Splunk or have good Splunk apps. That is to say: good logs, CIM compliance, tags, and solid apps that can go into Splunk relatively easily. Obviously Splunk isn't the only measure of success here, but it's one element we're looking at. Any recommendations?
Hey there! I am wondering if it is possible to create a regex for field extraction which extracts a string but, at the same time, leaves out part of the string. Let's say there is a log line with:

IP: 111.222.111.222

Now the extracted field should capture the IP, but without the dots (so the result should be "111222111222"). Is this even possible right at field extraction? Can you skip certain characters? Or can you extract each segment and then combine them somehow? AFAIK you can exclude with [^ ], but then you basically skip the whole entry and get nothing if that character occurs, which is not what I want. I want to match the whole string, but then capture just elements of it. Thank you!
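The "extract segments, then combine" idea can be demonstrated with Python's re module: a single regex cannot drop characters from the middle of one capture group, but it can capture each octet separately so a later step joins them. In Splunk the joining step would presumably be a follow-up eval rather than part of the extraction itself:

```python
import re

line = "IP: 111.222.111.222"
# Match the whole IP, but capture each octet in its own group;
# the dots are matched (so the pattern anchors correctly) yet excluded
# from every group.
m = re.search(r"IP:\s*(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})", line)
if m:
    print("".join(m.groups()))  # 111222111222
```

The key point: the dots are consumed by the match but fall between capture groups, so they never appear in the combined result.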
I am looking for an efficient way to calculate the total bandwidth used per second on a device from our netflow data. The netflow data we receive contains a start and end time for the flow (timestamp and endtime, respectively) as well as the total bytes transferred. It is simple enough to calculate BPS for each flow, but I cannot figure out how to calculate total bandwidth in a usable manner. Example netflow data:

{"endtime":"2020-03-02T17:35:31.850000Z","timestamp":"2020-03-02T17:04:51.630000Z","bytes_in":64,"dest_ip":"xxx.xxx.187.28","dest_mask":0,"dest_port":5061,"dest_sysnum":0,"event_name":"netFlowData","exporter_ip":"10.136.57.2","exporter_sampling_interval":1000,"exporter_sampling_mode":1,"exporter_time":"2020-Mar-02 17:35:22","exporter_uptime":1553552496,"flow_end_rel":1553562346,"flow_start_rel":1551722126,"ingress_vlan":103,"input_snmpidx":114,"netflow_version":9,"nexthop_addr":"0.0.0.0","observation_domain_id":0,"output_snmpidx":0,"packets_in":1,"protoid":6,"seqnumber":54418,"src_ip":"10.136.216.199","src_mask":0,"src_port":1028,"src_sysnum":0,"tcp_flags":16,"tos":184}

{"endtime":"2020-03-02T17:35:31.820000Z","timestamp":"2020-03-02T16:54:11.510000Z","bytes_in":68,"dest_ip":"xxx.xxx.187.28","dest_mask":0,"dest_port":5061,"dest_sysnum":0,"event_name":"netFlowData","exporter_ip":"10.136.57.2","exporter_sampling_interval":1000,"exporter_sampling_mode":1,"exporter_time":"2020-Mar-02 17:35:32","exporter_uptime":1553562496,"flow_end_rel":1553562316,"flow_start_rel":1551082006,"ingress_vlan":54,"input_snmpidx":49,"netflow_version":9,"nexthop_addr":"0.0.0.0","observation_domain_id":0,"output_snmpidx":0,"packets_in":1,"protoid":6,"seqnumber":54509,"src_ip":"10.136.189.15","src_mask":0,"src_port":1028,"src_sysnum":0,"tcp_flags":16,"tos":0}

I have been able to come up with a solution, but it only works with very small timeframes. I would like something that is significantly more robust.
The search below will only work with a very limited number of events:

sourcetype=stream:netflow
| dedup src_ip, src_port, dest_ip, dest_port, timestamp, exporter_ip
| eval start_time = strptime(timestamp . "-0000", "%FT%T.%6QZ%z")
| eval end_time = strptime(endtime . "-0000", "%FT%T.%6QZ%z")
| eval diff_secs = end_time - start_time
| eval diff = tostring(diff_secs, "duration")
| eval bps = if(isnull(bytes_in/diff_secs), 0, bytes_in/diff_secs)
| addinfo
| eval start_time_adj = if(start_time < info_min_time, info_min_time, start_time)
| eval temp = mvrange(start_time_adj, end_time)
| mvexpand temp
| rename temp AS _time
| bucket span=1s _time
| timechart sum(bps) as total_bps
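The underlying computation — spread each flow's bytes evenly over its duration, then sum per second — can be sketched in Python. It makes the scaling problem concrete: like the mvexpand approach above, it materializes one entry per flow per second, so a long flow expands into many rows. Names here are illustrative:

```python
from collections import defaultdict
from math import floor

def bytes_per_second(flows):
    """flows: list of (start_epoch, end_epoch, total_bytes).
    Spreads each flow's bytes evenly across its duration and returns
    {second: bytes} totals. Sub-second flows are ignored in this sketch."""
    totals = defaultdict(float)
    for start, end, nbytes in flows:
        duration = max(end - start, 1)  # guard against zero-length flows
        rate = nbytes / duration        # bytes per second for this flow
        for sec in range(floor(start), floor(end)):
            totals[sec] += rate
    return dict(totals)

# Flow 1: 400 bytes over seconds 0-3; flow 2: 100 bytes over seconds 2-3.
flows = [(0, 4, 400), (2, 4, 100)]
print(bytes_per_second(flows))  # {0: 100.0, 1: 100.0, 2: 150.0, 3: 150.0}
```

A more robust production approach would likely accumulate per-second rates incrementally (or at coarser buckets) instead of expanding every flow-second pair, which is exactly where the mvexpand version hits its limits.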
I have spent a few hours trying to solve this and searching the forum, but no luck so far. I have a single dataset containing a chunk of data. I am trying to create a predictive forecast for capacity consumption, but I'd like to display this in a Trellis dashboard, grouped by the value of one of the fields in the dataset. My data is quite consistent. This is as close as I can get:

index="my_data" source="capacityStats"
| timechart span=7d max(machinesAllocatedPercentage) as Machines, max(storageGBAllocatedPercentage) as Storage, max(memoryGBAllocatedPercentage) as Memory
| predict Machines, Storage, Memory

The problem is that this is a total average of ALL data, and therefore of all predictions. My data maps to multiple capacity sources. If I amend my query to:

index="my_data" source="capacityStats"
| timechart span=7d max(machinesAllocatedPercentage) as Machines, max(storageGBAllocatedPercentage) as Storage, max(memoryGBAllocatedPercentage) as Memory by CapacitySource

then I can drill down and display my statistics using the Trellis feature, simply by selecting 'CapacitySource' in the 'Split By' option for the Trellis. Seemingly, I am unable to do this in combination with the predict analysis? Does anyone know of a workaround? Creating separate dashboards manually for each capacity source won't suffice, since I will occasionally gain new ones or lose old ones, and I need the dashboard to be self-maintaining. I appreciate any help.
Are the Splunk UF 7.2.x releases compatible with being run on Linux kernel versions 4.x, specifically RHEL 8?
I've spent the last week trying to figure out the answer to this myself in the documentation and in the questions. I'm sure this is easy if you've been using Splunk for any length of time, but I'm very new. Also, I've submitted a project request for the Splunk team to help me, but they won't even touch it until it goes through an approval process. Here's my question: I have the following Splunk query that works:

index=MyWebServer ("WebService_01" AND "input") OR ("WS Total time")
| transaction TID host startswith="input" endswith="WS Total time"
| timechart span=1m count, avg(WSTotalTimeValue), max(WSTotalTimeValue), perc95(WSTotalTimeValue)

I need to add 2 more columns and add more web service names. Consider the following to be pseudocode:

index=MyWebServer (("WebService_01" OR "WebService_02" OR "WebService_03" OR "WebService_04") AND "input") OR ("WS Total time")
| transaction TID host startswith="input" endswith="WS Total time"
| timechart span=1m username, webservicename, count, avg(WSTotalTimeValue), max(WSTotalTimeValue), perc95(WSTotalTimeValue)

I've tried a variety of stats, bin, chart, etc. commands to try to get it to work, but the syntax is just too new to me. Any advice would be appreciated. Thanks.
Hi, currently the Cloudwatch Input is collecting all metrics for all of my S3 buckets, as shown here:

[{"BucketName":[".*"],"StorageType":[".*"]}]

How do I specify just one S3 bucket using the syntax above? Thanks, Eddie
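Assuming the dimension filter accepts the same regex-per-dimension list structure shown in the question, restricting collection to a single bucket would presumably mean replacing the ".*" pattern for BucketName with the bucket's name (my-bucket-name below is a placeholder, not a real bucket):

```json
[{"BucketName":["my-bucket-name"],"StorageType":[".*"]}]
```

If the field is regex-matched, an anchored pattern such as "^my-bucket-name$" may be safer to avoid partial matches against similarly named buckets.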
Hello all, I am trying to blacklist some of the apps below. No matter what I do, the apps continue to get deployed to our QA search head. I have already checked whether these apps are being deployed there via any other server class, and they are not. According to the docs, the blacklists below should work, right? I tried different ways of blacklisting them with no success... I would greatly appreciate any help. Thank you.

[serverClass:all_gensearch]
filterType = whitelist
whitelist.0 = spkprtsrch01*|spkqatsrch*
restartSplunkd = false
issueReload = true

[serverClass:all_gensearch:app:SA-ldapsearch]
[serverClass:all_gensearch:app:splunk_app_windows_infrastructure]
[serverClass:all_gensearch:app:Splunk_TA_microsoft_ad]
[serverClass:all_gensearch:app:Splunk_TA_microsoft_dns]
[serverClass:all_gensearch:app:TA-maclookup]
[serverClass:all_gensearch:app:TA-user-agents]
[serverClass:all_gensearch:app:TA_cisco_cdr

[serverClass:all_gensearch:app:Splunk_TA_nginx]
blacklist.0 = spkqatsrch*

[serverClass:all_gensearch:app:SA-nix]
restartSplunkd = false

[serverClass:all_gensearch:app:splunk_app_jenkins]
blacklist.0 = spkqatsrch*

[serverClass:all_gensearch:app:NetSkopeAppForSplunk]
blacklist.0 = spkqatsrch*

[serverClass:all_gensearch:app:TA-Zscaler_CIM]
blacklist.0 = spkqatsrch*

[serverClass:all_gensearch:app:duo_splunkapp]
blacklist.0 = spkqatsrch*

[serverClass:all_gensearch:app:zscalersplunkapp]
blacklist.0 = spkqatsrch*

[serverClass:all_gensearch:app:TA-Zscaler_CIM]
blacklist.0 = spkqatsrch*

[serverClass:all_gensearch:app:GSuiteForSplunk]
blacklist.0 = spkqatsrch*
I am a newbie and I have understood the basics of how to use props.conf. But I can't find any docs on ingesting events from AWS SQS, so how do I configure props.conf to use event_timestamp as _time? The definition says a props.conf stanza is always based on source | sourcetype | host; correct me if I am wrong. But in the case of AWS SQS, all three values are the same for more than one index, and I want this change only for one specific index. I'd appreciate some insight.

sourcetype: aws:s3:accesslogs
source: "s3://jjacob-stats/prod/*.gz"
host: ip-10-0-0-255
Hello, I'm trying to determine best practices for the following, and I don't want to reinvent the wheel if a Splunker has already resolved this issue. This is for a printer dashboard. This is a minimized, small-scale version of reality.

Setup
• 5 printers: A, B, C, D, E
• 2 printer statuses: UP, DOWN
• The dashboard is refreshed every 5 minutes, searching for the latest status of printers A - E

Process
• The 1st 5 minutes, printers A - E show status as UP
• The 2nd 5 minutes, printers A - D show status as UP, E as DOWN

Problem
• The 3rd 5 minutes, printers A - D are UP, E is ???? This is because the print server has not received any events from printer E; therefore, neither have the Splunk indexers

Possible solution (50,000 feet)
1. A lookup table that stores the last status ingested for each printer, including the time
2. The next time the search is run (5 minutes later), any printers missing a status ("No Printer Events!") will be looked up in the lookup table
3. The dashboard will be populated with the lookup status for those printers
4. Once the dashboard is fully populated, the lookup table will be cleared of all rows and repopulated from the dashboard status (status saved in a token, with time, for each printer)

I think this will work, but it will be a lot of coding. A first response to the above might be to increase the search time range from 5 minutes to 60 minutes, or 4 hours, or 24 hours, etc. The problem is that at some point a printer will have sent its status before that new time range. Below is the reality. Case in point: because of the limitation on how many images I can upload, these 3 time ranges (15 mins, 4 hours, all-time) have been combined into one image. Notice the different statuses, especially oix21. This printer was offline between 16 minutes and 4 hours ago. If the Helpdesk only had the 15-minute view, they would not know this printer is down, because a down printer doesn't write logs to the print server. Now we discover printer rv44's status is "toner low". According to our 15-minute view, this printer had no recent events. And this is with only 2 possible statuses; there are over 15 (door open, out of paper, etc.). Another possibility is to have the printers send a status update every 5 minutes; we are looking into that. I hope I did not convolute things with my explanation. Is there a Splunk best practice for storing the latest status (and the time thereof) and updating it when a new status is learned? Hmm, KV store perhaps?
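The merge step at the heart of the lookup-table idea — printers seen in the current window overwrite the cache, printers not seen keep their last known status and timestamp — can be sketched in Python. All names here are illustrative:

```python
def merge_status(cached, current, now):
    """cached:  {printer: (status, epoch_of_last_report)} - prior lookup table
    current: {printer: status} from the latest 5-minute search window
    now:     epoch time of this refresh
    Printers absent from the current window keep their cached status,
    which is the behavior the lookup-table approach is after."""
    merged = dict(cached)
    for printer, status in current.items():
        merged[printer] = (status, now)
    return merged

cached = {"A": ("UP", 100), "E": ("DOWN", 100)}
current = {"A": "UP", "B": "UP"}  # E sent nothing this window
print(merge_status(cached, current, 200))
```

In Splunk terms this is the usual pattern of appending a lookup (or KV store collection) to the live results, taking the latest entry per printer, and writing the merged set back with outputlookup, rather than clearing and rebuilding the table from dashboard tokens.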
Are there any suggestions / best practices for handling the Daylight Saving Time change? Specifically, stopping AppDynamics from sending out false alerts during the one-hour time-change window while still keeping monitoring active? This Sunday we're springing forward, so there will be an hour with no data. Last time this happened, we received several false alerts.
Not sure what happened this morning, but I was unable to log in as admin. I noticed that some of my alerts owned by my admin account had been orphaned even though the account was still active, and I found the error below. Has anyone run into a similar situation? I was able to log in after restarting the Splunk service on the indexer and forwarder. Thanks again.

startup:112 - Unable to read in product version information; [HTTP 401] Client is not authenticated