All Topics

Hello Splunkers!! Below is the search where we compare the last 3 hours against the same 3 hours 1 week ago. How can we use a dynamic token here, so that when the user selects 2 hours it compares 2 hours against the same 2 hours 1 week ago? In other words, how can we use a token in place of -3h?

index=ecomm_sfcc_prod sourcetype=sfcc_logs source="/mnt/webdav/*.log" "Order created successfully" ((earliest=@m-3h latest=@m) OR (earliest=@m-1w-3h latest=@m-1w))
| eval time=date_hour.":".date_minute
| eval date=date_month.":".date_mday
| chart count by time date
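A minimal sketch of one way this could look in a Simple XML dashboard, assuming a dropdown input whose token (here called span, a hypothetical name) is substituted into both time windows:

<input type="dropdown" token="span">
  <label>Comparison window</label>
  <choice value="2h">Last 2 hours</choice>
  <choice value="3h">Last 3 hours</choice>
  <default>3h</default>
</input>

index=ecomm_sfcc_prod sourcetype=sfcc_logs source="/mnt/webdav/*.log" "Order created successfully" ((earliest=@m-$span$ latest=@m) OR (earliest=@m-1w-$span$ latest=@m-1w))
| eval time=date_hour.":".date_minute
| eval date=date_month.":".date_mday
| chart count by time date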
Hello, can someone please guide me on how to extract a multi-value field called "GroupName" from my JSON data via the Field Extractor (IFX)? The different values are separated by ",\" as you can see in the raw events. By default it only extracts the 1st value. Raw events:   {"LogTimestamp": "Mon May 30 06:27:07 2022",[],"SAMLAttributes": "{\"FirstName\":[\"John\"],\"LastName\":[\"Doe\"],\"Email\":[\"John.doe@mycompany.com\"],\"DepartmentName\":[\"Group1-AVALON\"],\"GroupName\":[\"ZPA_Vendor_Azure_All\",\"Zscaler Proxy Users\",\"NewRelic_FullUser\",\"jira-users\",\"AWS-SSO-lstech-viewonly-users\",\"All Workers\"],\"userAccount\":[\"Full Time\"]     The regex generated by the IFX causes GroupName to have only 1 value: "ZPA_Vendor_Azure_All". I want it to also display the other values, such as: Zscaler Proxy Users, NewRelic_FullUser, jira-users, AWS-SSO-lstech-viewonly-users, All Workers. The end of the different values of the GroupName field is just before the "userAccount" field. I hope that is clear.
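If the IFX regex can't easily be made multi-value, a minimal search-time sketch is below (the field names groups and GroupName are illustrative, and the backslash escaping may need tweaking because SAMLAttributes is itself an escaped JSON string): the rex pulls everything between GroupName and the closing ], and the eval strips the escaped quotes and splits on the commas into a multi-value field.

... your base search ...
| rex "GroupName[^\[]+\[(?<groups>[^\]]+)\]"
| eval GroupName=split(replace(groups, "[\\\\\"]", ""), ",")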
Hi, I have a table like the one below. How can I show these cities on a map?

spl | table city count

city     count
الریاض   10
جدة      20
مکة      33

Thanks
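A minimal sketch, assuming a lookup file (here called city_coordinates.csv, a hypothetical name) that maps each city name to a latitude and longitude, which a Cluster Map visualization can then plot via geostats:

spl
| table city count
| lookup city_coordinates.csv city OUTPUT lat lon
| geostats latfield=lat longfield=lon sum(count) by city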
Hello, I was trying to find out the correlation among indexed fields, index-time field extraction, HF/UF, the deployment server, and performance. Do we need index-time field extraction to create indexed fields? When we use index-time field extraction, do we have to have an HF installed there, and does it have to be on the deployment server? What would be the computational overhead of index-time field extraction compared to search-time field extraction, given that Splunk highly recommends avoiding index-time field extraction? Thank you so much for your thoughts and support in finding this correlation.
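For reference, a minimal sketch of what an index-time field extraction typically involves (the sourcetype and field names are hypothetical); these settings take effect on whichever instance parses the data, normally a heavy forwarder or the indexers rather than a universal forwarder:

# props.conf
[my_sourcetype]
TRANSFORMS-extract_dept = extract_dept

# transforms.conf
[extract_dept]
REGEX = dept=(\w+)
FORMAT = dept::$1
WRITE_META = true

# fields.conf
[dept]
INDEXED = true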
Hi everyone, I want to prevent warm buckets from becoming cold; not to disable cold entirely, since it's mandatory to have a coldPath. The reason is that my Hot/Warm and Cold buckets are all on the same fast storage, and since I also need to define maxVolumeDataSizeMB for the coldPath, I want to use as much of my storage as possible for the homePath. Here's an example of what I mean. Total disk space: 100GB; homePath's maxVolumeDataSizeMB: 90GB; coldPath's maxVolumeDataSizeMB: 10GB; no frozenPath. I want to configure the indexes not to move data to cold buckets as much as possible, so that I can reduce the coldPath configuration to only 1GB, freeing up 9GB of space to allocate to the homePath on each of my other 30 indexers. From my Monitoring Console, I see that the coldPath is not used much, so 9GB across all indexers adds up to a lot of under-utilized space. Based on these stats, I could set it to 1GB today, but it might suddenly increase one day, which leads to my question above, as I want to set it in a deterministic way. Any advice is appreciated.
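A minimal sketch of the per-index settings that influence when warm buckets roll to cold (the index name and numbers are illustrative only); raising maxWarmDBCount so that rolling is driven by size rather than bucket count is one way to keep buckets warm longer:

# indexes.conf
[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# allow a very large number of warm buckets so rolling to cold is driven by size, not count
maxWarmDBCount = 4294967295
# cap the hot/warm portion of this index by size
homePath.maxDataSizeMB = 90000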
  index="np-dockerlogs*" source="*gps-request-processor-dev*" sourcetype= "*eu-central-1*" event="*Request" | fields event category labelType documentType regenerate businessKey businessValue... See more...
  index="np-dockerlogs*" source="*gps-request-processor-dev*" sourcetype= "*eu-central-1*" event="*Request" | fields event category labelType documentType regenerate businessKey businessValue sourceNodeType sourceNodeCode geoCode jobId status sourcetype source traceID processingTime _time | eval LabelType=coalesce(labelType, documentType) | sort _time | table event LabelType sourceNodeCode geoCode status traceID processingTime Above query provide three record for each traceid which indicate for the respective traceid request was received request was success/failed total time taken by the request now from this data i want to produce below type of table   geoCode   sourceNodeCode   LabelType        event         totalreqreceived     successrate      avgProcessingTime EMEA           1067                           Blindilpn     synclabelrequest           1                              100%                     450                                                             taskstart     synclabelrequest           5                                98%                    1500                        1069                          ilpn                synclabelrequest           1                              100%                     420   NA                1068                          NIKE            synclabelrequest             1                              100%                     500                                                            cgrade        synclabelrequest            4                                95%                      2000                                                            NIKE            asynclabelrequest          1                               100%                     350 This table shows the 'total no of request received' , 'there success percentage' and 'average processingtime' for each 'event (either synclabelrequest or asynclabelrequest)'  from a list of 'labelType' belongs to a specific sourceNodeCode and geocode
Here is my situation. I can use a subsearch to get two columns of data, like below. The rows are not aligned, so I can't simply use eval/if to compare. Some of the values are identical, but some are not. I want to output the values that exist in column1 but not in column2.

column1   column2
AA        BB
CC        AA
DD        FF
EE        ZZ
FF        XX
VV        MM
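A minimal sketch of one common pattern, assuming each column can be produced by its own search (the index=foo placeholders stand in for your existing searches): list the distinct column1 values, then exclude any that also appear as a column2 value via a NOT subsearch.

index=foo ...
| stats count by column1
| search NOT [ search index=foo ... | stats count by column2 | rename column2 as column1 | fields column1 ]
| fields column1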
The default ranges of the Overall Service Health Score are: Critical 0-20, High 20-40, Medium 40-60, Low 60-80, Normal 80-100, and in the Service Analyzer they are colored: Critical red, High amber, Medium orange, Low yellow, Normal green. Basically, I want to change the default ranges so that when the score is 89 it is treated as Low severity instead of Normal, and the service node shows yellow instead of green. Is it possible to change the default ranges?
Hi guys. Question: what's the best "maxKBps" setting in such an environment?

1Gbit LAN
About 2000 forwarders
6 indexers

I know there is no single correct answer, since it may vary from server to server and from environment to environment, but surely there is a best practice for setting this fundamental value, right? For months now I have been fine with a value of 0 (no bandwidth limit), but sometimes the indexers come under real stress while people load many, many GB of logs (more than 1TB, for analysis of historical data), since the indexers receive so much data that their resources saturate, so I need to force maxKBps to 10240 for some servers only, to keep things healthy. Now, is 10240 the right compromise for ALL forwarders, perhaps, with the option to raise the value later?
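For reference, a minimal sketch of where the throttle is set on the forwarders (the value shown is illustrative, not a recommendation), typically pushed out in an app from the deployment server:

# limits.conf on the universal forwarder
[thruput]
maxKBps = 10240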
Hi, I have a use case where I need to forward logs to a TCP listener (a third-party system) as soon as the logs are loaded into Splunk. For that, I am loading Windows security logs into Splunk from a Windows machine using the Splunk forwarder service, and I have configured the forwarder to forward the raw logs to the TCP listener as below.

================= props.conf ===============
[tcp:9080]
TRUNCATE = 0

[default]
# unless a more specific stanza clears the value of this class, the transform will be run
TRANSFORMS-selectiveIndexing = selectiveIndexing

[WinEventLog:Security]
TRANSFORMS-routing = transforms-windows-security-logs
# note the empty list of transforms to run in this class, overridden from the [default]
TRANSFORMS-selectiveIndexing =
=========================================

================ transforms.conf ===========
[transforms-windows-security-logs]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = windows-security-routing
=========================================

================ outputs.conf ===============
[indexAndForward]
index = true

[tcpout]
defaultGroup = everythingElseGroup

[tcpout:windows-security-routing]
server = xx.xx.xxx.xxx:9522
sendCookedData = false
========================================

Now, I have a Logstash listener on port 9522 which writes the data to a file. I see the logs being written to the file as below.

=============== Windows security logs forwarded to third party system =========
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"LogName=Security","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"EventType=0","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"SourceName=Microsoft Windows security auditing.","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"RecordNumber=24054056","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"TaskCategory=Process Termination","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"Message=A process has exited.\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"Subject:\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"\tAccount Name:\t\tBDELYSYS07$\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"\tLogon ID:\t\t0x3E7\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"Process Information:\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"\tProcess Name:\tC:\\Program Files\\SplunkUniversalForwarder\\bin\\splunk-winevtlog.exe\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.680Z","@version":"1","port":33332,"message":"05/23/2022 08:51:21 PM","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"EventCode=4689","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"ComputerName=BDELYSYS07.bdelysium.internal","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"Type=Information","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"Keywords=Audit Success","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"OpCode=Info","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"\tSecurity ID:\t\tS-1-5-18\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"\tAccount Domain:\t\tBDELYSIUM\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"\tProcess ID:\t0x2798\r","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"\tExit Status:\t0x0","host":"xx.xx.xxx.xxx"}
{"@timestamp":"2022-05-24T10:15:57.681Z","@version":"1","port":33332,"message":"LogName=Security","host":"xx.xx.xxx.xxx"}
=================================================

Going through the output, it seems the logs are not being forwarded event by event. I would like to know, with the above setup, how I can forward the logs to the TCP listener event by event. Also, is there a way to forward the Splunk-parsed data in JSON format to a TCP port?
Hello, I have a huge volume of data coming in under different source types (or indexes) for different applications/projects. In most cases ACCOUNTID and IPAddress are the unique fields for each of the applications/projects. I need to perform real-time searches over a wide range/period of time (30 days to All Time). How would I optimize these search criteria in real time? Any thoughts or recommendations would be highly appreciated. Thank you so much.
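One direction that may be worth sketching out (purely illustrative, and it assumes ACCOUNTID and IPAddress are either indexed fields or part of an accelerated data model) is to replace raw searches over long ranges with tstats over the indexed/accelerated data, which scales much better than event searches:

| tstats count where index=my_app_index earliest=-30d by ACCOUNTID IPAddress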
Hello Team, I am interested in determining the best way to count the number of uppercase letters, lowercase letters, and special characters in each value. Examples:
- PoWERshell = 4 uppercase and 6 lowercase and 0 special characters
- Powershell = 1 uppercase and 9 lowercase and 0 special characters
- Power`SHell = 3 uppercase and 7 lowercase and 1 special character
For each value in the same field, is it possible to count this and create field/value pairs for it? The desired table would have the following fields: (original field value) (count of uppercase letters) (count of lowercase letters) (count of special characters). Example output: Power`Shell --- 2 --- 8 --- 1
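A minimal sketch using eval and replace (the field name value is a placeholder for your actual field, and "special" here means anything that is not a letter or digit; adjust the character class if digits should count differently):

... your base search ...
| eval upper_count=len(replace(value, "[^A-Z]", ""))
| eval lower_count=len(replace(value, "[^a-z]", ""))
| eval special_count=len(replace(value, "[A-Za-z0-9]", ""))
| table value upper_count lower_count special_count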
Hello, I am trying to figure out how to rex-extract from text that starts with a newline and ends with a newline. For example: \\nCAR PRODUCT: bat mobile\n Does anyone know a good way around this situation so that only "bat mobile" is extracted? Thank you for your help. Spencer
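A minimal sketch, assuming the \n shown are actual newline characters in the event (the capture group name car_product is arbitrary); if they are literal backslash-n text instead, end the capture at a literal \\n rather than at a newline character:

| rex "CAR PRODUCT:\s+(?<car_product>[^\n]+)"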
Hi experts, I have a question about this issue. What methods are used to detect malware? Does it have anything to do with SVM or machine learning? Please help me answer this question. Thanks and best regards.
Hi Everyone, first time using Splunk Community. I have been working with Splunk for about a year and I've been doing okay, but I'm trying to use Active Directory logs to identify when accounts are created, and I was looking for ways to do this. I tried using userAccountControl or pwdLastSet=0, but what I thought was a sure thing was to use uSNCreated=uSNChanged. However, when I add that to the search, I get no results, even though I can see that the original creation record has the same value for both. Any suggestions are greatly appreciated. Thank you!
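One possible explanation, worth checking rather than a definitive answer: in a base search, uSNCreated=uSNChanged compares the field against the literal string "uSNChanged" rather than against the other field, so a field-to-field comparison has to go through where or eval. A minimal sketch, with the base search standing in for your AD search:

... your Active Directory base search ...
| where tonumber(uSNCreated) == tonumber(uSNChanged)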
Hi Splunkers, I am stuck on how to get counts for yesterday and last week. The ask is that when a relative time is selected from the time picker (in a dashboard), one panel should give me counts for yesterday and another panel should give me counts for last week. For example: 1) If I am searching 9pm to 10pm in my dashboard, I want a query that gives me the same time window but for yesterday, i.e. yesterday's 9pm to 10pm (query for yesterday). 2) If I run the same search, the other panel should give me counts for last week at the same time (query for last week). So I am looking for two separate queries. Basic query:

index::name type=sample_events "service"="auth" "successReason"=VALID
| stats count
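A minimal sketch of one common pattern for the yesterday panel, assuming it runs in a dashboard so the subsearch inherits the panel's time picker: the subsearch shifts the selected window back one day and overrides the outer time range (use "-7d" in both relative_time calls for the last-week panel).

index::name type=sample_events "service"="auth" "successReason"=VALID
    [| makeresults
     | addinfo
     | eval earliest=relative_time(info_min_time, "-1d")
     | eval latest=relative_time(info_max_time, "-1d")
     | return earliest latest ]
| stats count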
Hello! I'm working on a project where we would like to use Splunk Synthetic Monitoring to improve our monitoring of several web applications, but I'm running into issues where waiting for elements to be present is causing timeouts. Is there any way to set a custom wait time on a step? And if that way involves executing Javascript, what are the best practices for defining the custom wait times? Apologies if I'm posting this in the incorrect location!
If I run the search below, the statistics output changes while the search is progressing, and when the search has completed, not all of the results that were seen during the search are visible.

| tstats summariesonly=f count from datamodel=Network_Traffic by All_Traffic.app,All_Traffic.src,All_Traffic.dest,All_Traffic.dest_port,All_Traffic.action
| lookup xxxx CIDR AS All_Traffic.src OUTPUT CIDR as cidr_ip2 fullzonename as sourcenetwork ZoneType as sourcezonetype
| lookup xxxx CIDR AS All_Traffic.dest OUTPUT fullzonename as destinationnetwork ZoneType as destinationzonetype
| where (cidrmatch(cidr_ip2,'All_Traffic.src'))
| search sourcezonetype="yyyy" AND destinationzonetype!="*zzzz"
| stats count by sourcenetwork sourcezonetype destinationnetwork destinationzonetype All_Traffic.action
I have a log from an application that isn't structured in any standard format, and I am struggling with dropping certain lines at index time due to the line-merging configuration. This is a pseudo sample of the data:

----- <application version> -----
(<timestamp>) <data>
(<timestamp>) <data>
----- <application version> -----
(<timestamp>) <data>
(<timestamp>) <data>
----- <application version> -----
(<timestamp>) <data>
(<timestamp>) <data>
<data>
<data>
<data>
(<timestamp>) <data>

As you can see, for some events the message is broken down into multiple lines, so the best way to break events would be by the timestamp. This is the props.conf I wrote for this source type:

[my_new_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
TRANSFORMS-drop_header = new_sourcetype_drop_header

And the associated transforms.conf:

[new_sourcetype_drop_header]
REGEX = ^-{5}.+-{5}$
DEST_KEY = queue
FORMAT = nullQueue

The issue is that when the data is indexed, any event that would have been the <application version> header by itself is dropped, but then there are events with a linecount of 2 that look like:

(<timestamp>) <data>
----- <application version> -----

How do I force it so that the <application version> header is always made into its own event, so that it can be dropped by the transforms configuration?
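A minimal sketch of an alternative, using explicit line breaking instead of line merging, so that a break always happens both before a "(<timestamp>)" line and before a "-----" header line; this should leave the header as its own one-line event, which the existing nullQueue transform can then drop. The regex is illustrative and would need testing against real data (for example, continuation lines that themselves start with "(" or "-" would also break):

[my_new_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=\(|-{5})
TRANSFORMS-drop_header = new_sourcetype_drop_header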
Hello. I recently joined a new company that is using Splunk as their SIEM, and this past month I've been trying to learn a bit about the tool since I'm completely new to it. I was assigned, as an exercise, to work out a query that basically does these 2 things: identify potential policies with all ports enabled, and identify which of those policies are receiving requests from public IP addresses. So far I've come up with this query:

index="sourcedb" sourcetype=fgt_traffic host="<external firewall ip>" action!=blocked
| eventstats dc(dest_port) as ports by policyid
| stats count by policyid ports
| eval source_ip=if(cidrmatch("10.0.0.0/8", src) OR cidrmatch("192.168.0.0/16", src) OR cidrmatch("172.16.0.0/12", src),"private","public")
| where source_ip="public"

Basically, the main problem I'm having, and can't seem to find a reasonable solution for, is that I've already managed to filter out private IP addresses from the results, but I feel like my eventstats line is not working properly, mainly because I'm counting all the distinct destination ports rather than the distinct ports per policyid. I'd be really grateful if you could give me a hint or some advice about how to approach this case. Thanks in advance.
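A minimal sketch of one way to restructure it (the threshold of 1000 distinct ports is an arbitrary placeholder for "effectively all ports open"); note that the cidrmatch on src has to happen before any stats that drops the src field:

index="sourcedb" sourcetype=fgt_traffic host="<external firewall ip>" action!=blocked
| eval source_ip=if(cidrmatch("10.0.0.0/8", src) OR cidrmatch("192.168.0.0/16", src) OR cidrmatch("172.16.0.0/12", src), "private", "public")
| where source_ip="public"
| stats dc(dest_port) as distinct_ports, count as hits by policyid
| where distinct_ports > 1000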