All Topics


eventtype="*" "screen" OR "ui1" | stats count AS TotalEvents by product | appendcols [search eventtype="*" "ui2" OR "ui3" | stats count AS subsetEvents by product] | eval percentage = 100 * subsetEvents / TotalEvents | where percentage > 1

The performance of this query is slow. I want to calculate percentages based on subsetEvents and TotalEvents. TotalEvents is retrieved from "screen" or "ui" events, and subsetEvents is retrieved from "screen1" or "ui1" events. Any help is highly appreciated.
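One pattern sometimes used to avoid the appendcols subsearch is to scan the events once and count both sets with conditional eval flags. This is only a sketch based on the terms in the query above; the match() patterns are assumptions that would need to be adjusted to the real event text:

```spl
eventtype="*" ("screen" OR "ui1" OR "ui2" OR "ui3")
| eval is_total=if(match(_raw, "screen|ui1"), 1, 0)
| eval is_subset=if(match(_raw, "ui2|ui3"), 1, 0)
| stats sum(is_total) AS TotalEvents, sum(is_subset) AS subsetEvents by product
| eval percentage = 100 * subsetEvents / TotalEvents
| where percentage > 1
```

Counting both sets in a single pass avoids the second scan of the index and the row-alignment caveats of appendcols.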
Hi, when receiving an overall average response time alert, we want to see the particular value at the time the alert triggered for the defined threshold. For example, I configured the warning threshold as 75, and at that time the overall average response time metric value was 78. I received the alert that it breached the defined threshold, but I am not able to see that particular value, i.e. 78 (what the value was at the moment the alert triggered). How can I fix this?
I managed to extract the data below from Splunk:

ID  SignalStrength  TimeStamp
01  3               09:00:05
01  0               09:30:00
02  0               09:00:05
02  0               09:30:00
02  3               09:55:00

But I wanted to reduce it further to only get the last record in each hour, like this:

ID  SignalStrength  TimeStamp
01  0               09:30:00
02  3               09:55:00

I tried this:

| stats max(Timestamp) by ID, SignalStrength

but it gave me the maximum for the day, not per hour.
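A sketch of one possible approach, assuming the events carry a usable _time and the field names shown above: bucket the events into hourly bins, then keep the latest record per ID within each hour.

```spl
| bin _time span=1h
| stats latest(SignalStrength) AS SignalStrength, latest(TimeStamp) AS TimeStamp by ID, _time
```

Grouping by the binned _time is what scopes "last record" to the hour instead of the whole day.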
Hello, I have this dispatch directory getting filled by a RemoteStorageRetrieveIndexes_* directory being created multiple times a minute. I am not sure where this is coming from. I checked all the saved searches and alerts, and I even recursively grepped the entire Splunk config directory, but found nothing defined by this name. I think this is causing an issue with the search disk quota being exhausted. What could be creating this directory? It only started happening recently.
I'm testing out Splunk for my home network and I'm running into an issue. I have configured my home router (Ubiquiti Dream Machine) to forward syslog to my virtual instance of Splunk. I have reconfigured the default UDP port 514 to UDP port 1514. I can confirm via Wireshark that the VM is receiving the logs. I feel like it's something small, but I can't figure it out. I used the "Data Inputs" wizard to capture the data. Any help here would be greatly appreciated.
Hello everyone! I have sample data as below:

Analyst  Span
A        1049d 00h 00m
B        430d 01h 00m
C        225d 05h 00m

I would like to add one more column that converts the Span column into a number of years. Here d indicates days, h indicates hours, and m indicates minutes in the Span column. Thanks in advance!
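A sketch of one way to do this, assuming Span always follows the `<days>d <hours>h <minutes>m` layout shown above and using 365 days per year as a rough conversion:

```spl
| rex field=Span "(?<d>\d+)d (?<h>\d+)h (?<m>\d+)m"
| eval Years = round((d + h/24 + m/1440) / 365, 2)
```

The rex pulls the three numeric parts into fields, and the eval folds them into fractional days before dividing by 365.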
Prior to updating to Splunk Enterprise 8.0.2, scheduled accelerated reports ran extremely fast:

Report A Duration: 37.166 Record count: 314

After updating to Splunk Enterprise 8.0.2, the report ran extremely slowly:

Report A Duration: 418.621 Record count: 300

Given the patch notes for 8.0.2, I'm not seeing any changes to acceleration or summary indexing, so is it safe to assume this is a fluke? The massive increase in report generation (job) time of the scheduled accelerated reports appears to be caused by them no longer accessing the corresponding report acceleration summary. The "Access Count" never goes up when the scheduled reports are run. Guess we'll wait for 8.0.3 to fix this.

Troubleshooting steps attempted:
- Manually rebuilt the report acceleration summaries
- Deleted all affected report acceleration summaries
- Deleted and recreated the affected production reports (recreated the schedule and checked the box for acceleration)
- Checked filesystem permissions of the inputlookup csv (confirmed -rw-rw-r-- splunk splunk)
Hi... I've installed SAW and everything proceeds fine until I get to the CHECK DATA part of the setup. When it runs its search checks, it returns invalid, i.e. no data within 24 hours. I copied the search that it uses, which is just sourcetype="Perfmon*" | head 5, and it does indeed return nothing. But if I run index=oswinperf sourcetype="Perfmon*" | head 5, it works. So how do I change the search setting inside the configuration wizard so I can start using SAW? Thanks in advance for your help.
Hello experts, I'm trying to do a simple thing but I'm not able to figure it out. My problem is that I want to produce a table based on a condition, like below:

if condition=TRUE, stats values(A) as A, values(B) as B by C, else stats values(Z) as Z, values(X) as X by Y

So, if the condition is true I want to build a table with certain variables, otherwise with some others. Thanks much.
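If the condition can be expressed as a search-time filter, one possible pattern (a sketch only; `condition=TRUE` and `<base search>` stand in for the real filter and search) is to run both branches and combine them with append, so only the branch whose filter matches produces rows:

```spl
<base search> condition=TRUE
| stats values(A) AS A, values(B) AS B by C
| append
    [ search <base search> NOT condition=TRUE
    | stats values(Z) AS Z, values(X) AS X by Y ]
```

When the condition holds, the first stats fills the table and the appended branch returns nothing, and vice versa.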
search 1: ... | table src_ip

search 2: tag=authentication user!=*$ src_ip=xx.xx.xx.xx | head 1 | table user src_ip

From the search 1 results I need to find the user, so I have search 2 to find that, but I want to show both results in one search. I tried this:

search1 ... | table src_ip | join type=left src_ip [|search tag=authentication user!=*$ src_ip=$src_ip$ | head 1 | table user src_ip

but I am not able to get a result. Can someone help?
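A sketch of one way this join is often written (field names assumed from the searches above): let the subsearch return one user per src_ip for all addresses, and let join match on the shared src_ip field rather than a $src_ip$ token, which is a dashboard-token syntax and is not substituted inside a plain join subsearch. Note the subsearch also needs a closing bracket:

```spl
search1 ... | table src_ip
| join type=left src_ip
    [ search tag=authentication user!=*$
    | stats latest(user) AS user by src_ip ]
```

The stats by src_ip replaces the head 1 so that every address from search 1 can find its matching user.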
Has anyone managed to get a .NET Core for Linux agent to work on AWS Fargate?  We've had a number of attempts across multiple teams working on it with no luck.    The controller receives the request to create a node just fine, but it doesn't seem to be able to pull in any of the normal metrics, or even collect basic agent metrics, like Agent|Availability.   No BTs are discovered, even when creating a standard ASP_DOTNET or POCO discovery rule to supersede the standard OOTB discovery rules. Just wondering if there are any users that were able to get something to work on Fargate. https://docs.aws.amazon.com/AmazonECS/latest/developerguide/AWS_Fargate.html 
Greetings! Is Splunk Enterprise 7.2.6 vulnerable? Kindly help me with more information; I need to know whether this version ("Splunk Enterprise 7.2.6") is vulnerable, and what the steps are to update to the latest version. I need help! Thank you in advance!
Hi all, following up on https://answers.splunk.com/answers/808200/splunk-alerts-not-sending-e-mail.html?childToView=810356#answer-810356. I wanted to figure out whether any permissions are needed to enable a Splunk alert from my account. Is there a way I can check the permissions needed to create a working Splunk alert (one that sends out an email)? Not sure if I'm providing enough information, so please let me know if I need to provide more.
Here is a snippet of a log file that I am trying to do line breaking on. I want it to break only when one line matches "info]*" and the next line has "info]Line".

[2019-12-18 07:00:01.070924-07:00|info]Line 3: :begin
[2019-12-18 07:00:01.070924-07:00|info]Line 4:
[2019-12-18 07:00:01.070924-07:00|info]Line 5: WORKINGDIR "C:\Download\Server1"
[2019-12-18 07:00:01.070924-07:00|info]*Working directory: C:\Download\Server1\
[2019-12-18 07:00:01.070924-07:00|info]Line 6:
[2019-12-18 07:00:01.070924-07:00|info]Line 7: FTPLOGON "Server1" /timeout=60
[2019-12-18 07:00:01.070924-07:00|info]*Logging on to <server1> as SFTP (SSH File Transfer Protocol)
[2019-12-18 07:00:01.070924-07:00|info]*Logon in progress...
[2019-12-18 07:00:03.055523-07:00|info]*Logon successful.
[2019-12-18 07:00:03.055523-07:00|info]Line 8: FTPCD "Extracts"
[2019-12-18 07:00:03.164909-07:00|info]*Current FTP site directory: /Extracts/
[2019-12-18 07:00:03.164909-07:00|info]Line 9: IFERROR= $ERROR_SUCCESS GOTO Operation1
[2019-12-18 07:00:03.164909-07:00|info]Line 21: :Operation1
[2019-12-18 07:00:03.164909-07:00|info]Line 22: FTPGETFILE "*na_alert_subs*" /newest
[2019-12-18 07:00:03.164909-07:00|info]*Hint: FTPGETFILE /newest always returns the newest file
[2019-12-18 07:00:03.430561-07:00|info]Line 22: *%sitefile has been set to: na_alert_subs_20191217.txt
[2019-12-18 07:00:03.446223-07:00|info]Line 23: RCVFILE %sitefile /delete
[2019-12-18 07:00:03.446223-07:00|info]*Receiving to "C:\Download\Server1\na_alert_subs_20191217.txt"
[2019-12-18 07:00:12.947244-07:00|info]*Complete, received 1394788 bytes in 9 seconds (1513.44K cps)
[2019-12-18 07:00:13.103506-07:00|info]*File deleted on FTP site.
[2019-12-18 07:00:13.103506-07:00|info]*Download complete, 1 file received.

So that snippet would break down into five events.
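A sketch of a props.conf stanza for this rule (the sourcetype name is a placeholder, and the regex is an assumption to verify against real data): in LINE_BREAKER only the capture group is consumed as the delimiter, so the text before it anchors the break to an "info]*" line, and the lookahead requires the next line to be an "info]Line" line.

```ini
[my_ftp_sourcetype]
SHOULD_LINEMERGE = false
# Break only between an "info]*..." line and a following "info]Line..." line
LINE_BREAKER = info\]\*[^\r\n]*(\r?\n)(?=\[[^\r\n]+\|info\]Line)
```

Applied to the snippet above, this would produce breaks after the "*Working directory", "*Logon successful.", "*Current FTP site directory", and "*Hint" lines, yielding the five events described.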
Greetings!!! hope you are doing well! how to update Splunk Enterprise from 7.2.6 to 8.0 version? what is required? is it free? please i need help! Thank you in advance
I am having issues with a search / subsearch with appendcols when the number of rows is different. I have a summary search that collects the license usage data by index into a summary index, giving the MBs Used for each index I have in the environment. The data looks like this in the index:

Index=syslog MBs Used=660336
Index=wineventlog MBs Used=347123

Now we are trying to build a search that will show us the % difference for each index by day or week. This is to give us information if our license volume shoots up, so we can find the indexes involved. The search that uses this summary index is below. It was working fine until we added a new index. Once the new index was added, the Index2 column was off by one row from the point where that index name appeared in the list, and this skews the Index2 column so the comparison (%difference) is wrong. I expect the same will happen as we decommission indexes. Any assistance on how to handle a different number of rows for each search would be appreciated.

%difference search - comparing yesterday to 7 days ago:

index=index_metric_summary "License Pool"=site1 earliest=@d latest=now | sort Index | stats median("MB's Used") as yesterday by Index | appendcols [ search index=index_metric_summary "License Pool"=site1 earliest=-7d@d latest=-6d@d | sort Index | rename Index as Index2 | stats median("MB's Used") as previous by Index2] | eval %difference=round(((yesterday-previous)/previous)*100,2) | table Index Index2 previous yesterday %difference

This is what the output looks like around the point where the new index is listed:

Index         Index2           previous  yesterday  %difference
atapower      atapower         376       400        6.38
cardfile      cardfile         596       594        -0.34
cbs_logs_new  cbs_logs_qa      0         63890      249778   <<-- wrong compare for %difference
cbs_logs_qa   cbs_logs_stress  1046      7262       94.26
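Because appendcols pairs rows purely by position, one pattern that avoids the alignment problem is to pull both periods in a single search, label each event by period, and group by Index so the two values always land on the same row regardless of which indexes exist in each period. This is only a sketch using the field names above:

```spl
index=index_metric_summary "License Pool"=site1 earliest=-7d@d latest=now
| eval period=case(_time >= relative_time(now(), "@d"), "yesterday",
                   _time < relative_time(now(), "-6d@d"), "previous")
| where isnotnull(period)
| chart median("MB's Used") over Index by period
| eval %difference=round(((yesterday-previous)/previous)*100,2)
```

An index present in only one period simply gets a null in the other column instead of shifting every row below it.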
I have an on-prem heavy forwarder installed on a Windows OS. I am troubleshooting why logs are not coming into the Splunk Cloud indexer from a cloud service over an API. The API connection is between my on-prem Splunk heavy forwarder and the cloud service. I suspect the problem is on the cloud service side. I need a way to tell if the logs are even making it to my heavy forwarder. Is there a way to tail a running log on the heavy forwarder? Also, I am referring to the on-prem Splunk server as a heavy forwarder; is that the proper term? It sends data to the cloud indexer.
Hello everyone, I'm trying to put together a regex statement that will allow me to select only the XML nodes that contain values. In the actual data there are tons of XML nodes; some may have data, some may not. Instead of defining all of them individually, I'd like to make it more dynamic. My thought was to use a regex to select only the nodes that have values, and then use a table * type command to send what's pulled back to a table. If there is a better way to do this using spath or xpath, I'm all ears! So far, I can achieve what I want if the XML is on individual lines, using the expression below. The problem is, the XML is streamed, and this expression will not work for streamed XML. I spent a few hours trying to get a working regex to no avail. Any help is greatly appreciated!

Regex that works for XML on individual lines (this omits empty tags and selects all other values): (.+)>(?![<\/]).+

<root>
  <Node1>Value1</Node1>
  <Node2>Value2</Node2>
  <Node3></Node3>
  <Node4></Node4>
  <Node5>Value3</Node5>
  <Node6>Value4</Node6>
</root>

The actual data is contained all on one line, and I am unable to get a regex that does what is being done above:

<root><Node1>Value1</Node1><Node2>Value2</Node2><Node3></Node3><Node4></Node4><Node5>Value3</Node5><Node6>Value4</Node6></root>
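A sketch of one possible approach for the single-line case, assuming the tag names are simple word characters: a global rex with max_match=0 can pull every non-empty node into a pair of multivalue fields, since [^<]+ requires at least one character of value and so skips empty tags.

```spl
| rex max_match=0 "<(?<node>\w+)>(?<value>[^<]+)</"
```

node and value then come back as parallel multivalue fields, which could be re-paired with mvzip and mvexpand if individual rows are needed.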
I have this search, and it works correctly:

source=foo resource=bar earliest=-1d@d latest=now | eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today") | rex max_match=0 "(?:'id': )(?P<id>[^,]+)|(?:'usage': )(?P<usage>[^,]+)" | chart max(usage) over id by Day | where Yesterday!=Today | sort Today

It shows Today's bar to the left of Yesterday's bar for each id. I tried to reverse the order, to show Yesterday's bar to the left of Today's bar for each id, but did not find a way to make it work unless I rename the columns, e.g. rename "Yesterday" to "Before" and "Today" to "Now". It appears that the default behavior is to sort the series in alphabetical order. Is there a better way to do this? Thank you
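One possible approach (a sketch only; I have not verified it against this exact chart): bar-chart series generally follow the field order of the results, so explicitly reordering the fields after the chart may flip the bars without renaming anything.

```spl
source=foo resource=bar earliest=-1d@d latest=now
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| rex max_match=0 "(?:'id': )(?P<id>[^,]+)|(?:'usage': )(?P<usage>[^,]+)"
| chart max(usage) over id by Day
| where Yesterday!=Today
| sort Today
| fields id, Yesterday, Today
```

If the visualization still sorts alphabetically, renaming remains the fallback, but the fields command is worth trying first since it keeps the original labels.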
Query:

index::dlp | bucket _time span=1d | stats count(EVENT_DESCRIPTION) AS "Count" BY _time,User_Name,EVENT_TYPE,EVENT_DESCRIPTION | stats median(Count) AS "Median" BY _time,EVENT_TYPE

I am trying to calculate the average or median number of DLP events per user per day for each different type of event. I don't think my query is correct, as some of the numbers don't make sense. I don't actually want to see the average number for each user; I just want to calculate the statistic across all users, if that makes sense. For example, if there are 12 users and 3 types of events, I want to know on day 1 what the average number of events per user would be for each event type. The result would show only 3 numbers, the averages for each event type. So if the results were:

Send Mail: 10
Upload: 2
Download: 5

I would interpret this as: on day 1, each user had an average of 10 Send Mail events, and so on. Ideally I would like to calculate this for any time frame.
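A sketch of one way to phrase this as a two-stage stats, assuming the field names above: first count events per user per day per event type, then average (or take the median of) those per-user counts by day and event type. Note this drops EVENT_DESCRIPTION from the first grouping, since including it splits each user's daily count across descriptions.

```spl
index::dlp
| bucket _time span=1d
| stats count AS per_user_count BY _time, User_Name, EVENT_TYPE
| stats avg(per_user_count) AS "Average", median(per_user_count) AS "Median" BY _time, EVENT_TYPE
```

Changing the span (or removing _time from the second BY clause) adapts the same shape to other time frames.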