All Topics



I have a requirement to fetch stats counts from raw data logs. Sharing the query and results.

Query:

index="bw6_stg" sourcetype="HYD01" | rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];" | stats count by Applname

For the above query these are the results:

Applname    count
abcd        5
abcd.app    6
efgh        4
efgh.app    3

Now I want to add the 'abcd' count and the 'abcd.app' count (5+6), so it shows total=11. Same for 'efgh' and 'efgh.app' (4+3): total=7. I need to build a query for these totals; can anyone guide me on this?
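One way to get those totals (a sketch; base_name is an illustrative field name, and it assumes the variant names always differ only by a trailing ".app") is to normalize the application name before the stats:

```
index="bw6_stg" sourcetype="HYD01"
| rex field=_raw "ApplicationName:\s+\[(?P<Applname>.*)];"
| eval base_name=replace(Applname, "\.app$", "")
| stats count as total by base_name
```

This groups the abcd and abcd.app events under one row (total=11) and likewise efgh and efgh.app (total=7).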
After upgrading to SSE version 3.2.2, we are getting the following error while loading the MITRE Map:

Error in 'lookup' command: Could not find all of the specified lookup fields in the lookup table.

Has anyone come across this while using the app? Thanks, ~ Abhi
We've recently seen a significant spike in memory utilization on our search heads ... Looking at the files opened by mongod I'm seeing info like this:

-rw------- 1 root root  16777216 Dec  7 16:37 s_SA-AccyUyi@longstring@.0
-rw------- 1 root root  33554432 Sep 15 20:58 s_SA-AccyUyi@longstring@.1
-rw------- 1 root root 536608768 Nov 16 02:36 s_SA-AccyUyi@longstring@.10
-rw------- 1 root root 536608768 Sep 19 23:28 s_SA-AccyUyi@longstring@.11
-rw------- 1 root root 536608768 Dec  7 16:38 s_SA-AccyUyi@longstring@.12
-rw------- 1 root root  67108864 Sep 15 20:58 s_SA-AccyUyi@longstring@.2
-rw------- 1 root root 134217728 Dec  7 16:29 s_SA-AccyUyi@longstring@.3
-rw------- 1 root root 268435456 Dec  7 16:38 s_SA-AccyUyi@longstring@.4
-rw------- 1 root root 536608768 Sep 19 23:28 s_SA-AccyUyi@longstring@.5
-rw------- 1 root root 536608768 Dec  7 16:38 s_SA-AccyUyi@longstring@.6
-rw------- 1 root root 536608768 Dec  7 16:38 s_SA-AccyUyi@longstring@.7
-rw------- 1 root root 536608768 Sep 19 23:32 s_SA-AccyUyi@longstring@.8
-rw------- 1 root root 536608768 Sep 19 23:31 s_SA-AccyUyi@longstring@.9

Any idea why there are so many versions of some of these? Shouldn't there typically only be a ".0" and a ".ns" file for each collection? Thank you
Hi everyone, I need some help with extracting the field 'message' from my logs coming into Splunk. Right now, I am able to see this field coming in as: message=job py process completed successfully. When I extract this field, message, only 'job' comes through. I am assuming this is because Splunk only reads up to the first space, since the words are all separated by spaces. Is there any way I can fix this through Splunk, or is this something I need to fix when formatting my logs in my application code?
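If the value is written without quotes, Splunk's automatic key/value extraction stops at the first space, which matches the symptom described. A workaround on the Splunk side (a sketch; the regex assumes message= is the last key on the line) is an explicit rex:

```
... | rex field=_raw "message=(?<message>.+)$"
```

Alternatively, quoting the value in the application code (message="job py process completed successfully") usually lets the automatic extraction capture the whole phrase.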
I updated my Ruby app to use signalfx 3.1.0 (from 2.1.0). I was surprised to see this gem downgrade when I bundled:

Installing i18n 1.1.0 (was 1.8.5)

That's rewinding the i18n gem about two years. Looking at signalfx.gemspec, it locks it to exactly that version:

spec.add_dependency "i18n", "= 1.1.0"

Is there a reason for that? An optimistic constraint such as ">= 1.1.0" would seem safe and not so restrictive, and would match most of the other dependencies in the gemspec. While this is not a blocker for our app, please consider this a feature request to relax the i18n version dependency.
I have users getting the "maximum disk usage quota has been reached" message and from other questions and answers I see I need to increase the srchDiskQuota setting in the authorize.conf file. I have a SHC and when I look for the authorize.conf file I see it in /opt/splunk/etc/system/default/authorize.conf - if I modify the file in that directory and then push out to my SHs, do I need to worry about the /opt/splunk/etc/system/default/authorize.conf  being overwritten when I update Splunk in the future? 
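Yes — files under /opt/splunk/etc/system/default are replaced on upgrade, so the usual pattern is to put the override in a local or app context and push it from the deployer rather than editing the default file. A sketch, assuming the role is named user and an illustrative app name:

```
# On the deployer: shcluster/apps/org_search_settings/local/authorize.conf
# (org_search_settings is a hypothetical app name)
[role_user]
srchDiskQuota = 500
```

After `splunk apply shcluster-bundle` this lands under etc/apps on the SHC members, where it overrides the system default and survives future Splunk upgrades.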
I have a Splunk webhook that calls a REST URL, and I would like to pass a value from the search results as part of the Alert Action > URL. Has anyone tried this and can offer guidance?
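Alert action parameters are generally subject to token substitution, so one approach (a sketch; whether the stock webhook action expands $result.*$ tokens in its URL may depend on the Splunk version, and status is a hypothetical field from the search results) is to embed the token in the URL:

```
# savedsearches.conf (illustrative)
action.webhook = 1
action.webhook.param.url = https://example.com/api/notify?status=$result.status$
```

$result.<field>$ refers to the value from the first result row of the triggering search.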
Can Splunk Web in a Splunk Enterprise solution be hosted on a server other than the Splunk Search Head? We are currently hosting Splunk Enterprise 8.0.1 on two servers: Search/Indexer and Heavy Forwarder. Splunk Web is running on the Search/Indexer and is integrated with our corporate LDAP system for user logon authentication. We recently acquired a third server which has twice the processing power and RAM than our current Search/Indexer so we are moving the Search/Indexer to that server and will use the old server as a second forwarder. The new server is not inside our corporate domain so LDAP integration will be a challenge. Can the Splunk Web service be hosted on a server other than the Splunk Search Head so we can host Splunk Web on a server within our corporate domain? If so, what are the resource requirements for the Splunk Web host?
We are pulling in DNS debug logs from Windows servers. I have a few servers that have been running for a while, and we are now adding inputs to pull in the event logs as well. After pushing out the new inputs to the UFs, I noticed that the log files must have data starting around March of this year. At the rate it is ingesting we won't ever catch up, and I don't need to be pulling in that old data. We are using "MonitorNoHandle" within the inputs, but from my research I can't find any switches that would let me collect only the "new" events going forward. I found that the Windows event log monitor has the "start_from" parameter, but that does not seem to work with or apply to the MonitorNoHandle stanza from what I can tell. Are there options I am missing that would do this?
Hi, I am looking at the Palo Alto add-on from https://splunkbase.splunk.com/app/2757/ and specifically at logs with sourcetype pan:userid. All the logs get the username "unknown". When digging into this, I see this in props.conf:

[pan:userid]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^(?:[^,]*,){6}
MAX_TIMESTAMP_LOOKAHEAD = 32
REPORT-search = extract_userid
FIELDALIAS-virtual_system = vsys as virtual_system
FIELDALIAS-src_for_pan_correlation = src_ip as src
FIELDALIAS-dest_ip_for_pan_correlation = src_ip as dest_ip
FIELDALIAS-client_ip = src_ip as client_ip
FIELDALIAS-dest_for_pan_correlation = src_ip as dest
FIELDALIAS-dvc_for_pan_correlation = host as dvc
EVAL-user = coalesce(src_user,"unknown")

and in transforms.conf I find:

[pan_userid]
DEST_KEY = MetaData:Sourcetype
REGEX = ^[^,]+,[^,]+,[^,]+,USERID,
FORMAT = sourcetype::pan:userid

[extract_userid]
DELIMS = ","
FIELDS = "future_use1","receive_time","serial_number","type","log_subtype","version","generated_time","vsys","src_ip","source_name","event_id","repeat_count","timeout_threshold","src_port","dest_port","source","source_type","sequence_number","action_flags","devicegroup_level1","devicegroup_level2","devicegroup_level3","devicegroup_level4","vsys_name","dvc_name","vsys_id","factor_type","factor_completion_time","factor_number"

One thing I notice is that there is no src_user in the FIELDS list in [extract_userid]. I am probably missing something here, but my conclusion is that the field will never be filled. Does anyone have an idea how to get the user field filled with the username?

Just for reference, a log that should fit here (and does, partially):

<14>1 2020-12-07T15:14:29+01:00 Servername-XX - - - - 1,2020/12/07 15:14:29,000101011111,USERID,logout,223,2020/12/07 15:14:29,vsys,10.10.10.11,client\usr.name,client-loc-id,0,1,0,0,0,agent,,1111111111111111114,0x0,0,0,0,0,,Servername-XX,0,,2020/12/07 15:14:29,1,0x0,client\user.name
Hi, we are forwarding data from two forwarder servers to the indexer. From one forwarder the data arrives on the indexer and is visible from the search head, but we are not receiving data on the indexer from the other forwarder. Even in that forwarder's logs we can see it is connected to the indexer, but logs are not getting forwarded.

We can see these logs in splunkd.log:

12-07-2020 12:38:07.059 +0100 INFO  TcpOutputProc - Connected to idx=xx.xx.xx.xx:9998, pset=0, reuse=0.
12-07-2020 12:38:07.091 +0100 INFO  WatchedFile - Will begin reading at offset=20396720 for file='E:\Apps\SplunkUniversalForwarder\var\log\splunk\metrics.log'.
12-07-2020 12:38:07.106 +0100 INFO  WatchedFile - File too small to check seekcrc, probably truncated.  Will re-read entire file='E:\Apps\SplunkUniversalForwarder\var\log\splunk\license_usage.log'.
12-07-2020 12:38:07.153 +0100 INFO  WatchedFile - File too small to check seekcrc, probably truncated.  Will re-read entire file='E:\Apps\SplunkUniversalForwarder\var\log\splunk\remote_searches.log'.

And this in health.log:

TCPOutAutoLB-0 - More than 70% of forwarding destinations have failed

outputs.conf:

[tcpout]
defaultGroup = lb

[tcpout:lb]
server = xxx.xxx.com:9998
autoLB = true
Hi All, I'm trying to compare row values. My table is like:

App     label         env    space
mini1   jenkins-a21   p1     1290
mini2   jenkins-a22   p1     1687
mini2   jenkins-a21   p2     1290
mini3   jenkins-a23   p2     1598
mini4   jenkins-a24   p1     1687
mini3   jenkins-b23   p1     1598

The output should be like:

App     label         env    space   Result
mini1   jenkins-a21   p1     1290    matched       (comparing label values for p1 and p2)
mini2   jenkins-a21   p2     1290    matched       (comparing label values for p1 and p2)
mini3   jenkins-a23   p2     1598    not matched   (comparing label values for p1 and p2)
mini3   jenkins-b23   p1     1598    not matched   (comparing label values for p1 and p2)

@woodcock
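One way to flag this (a sketch, assuming "matched" means the same label value occurs in both the p1 and p2 environments) is to count distinct env values per label:

```
... | eventstats dc(env) as env_count by label
| eval Result=if(env_count=2, "matched", "not matched")
| table App label env space Result
```

env_count and Result are illustrative field names; eventstats keeps the original rows while attaching the per-label distinct count.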
Hi, I'm configuring the action rule for when an alert is raised in ITSI, and I need to add the time to the message or the subject of the email sent. I've seen the following fields: $result.start_time$ and $result.last_time$, but the time appears in epoch, something like 1607006402.725. I've seen that the $result.orig_raw$ field begins with the time the event starts, but there is a lot of information after it. (Example: "result.orig_raw=12-04-2020 09:30:02 KPI alert...") Would it be possible to extract the time in a format like "dd-mm-yyyy hh:mm:ss" into a different field? If not, could I extract just the beginning of the text from orig_raw? Or transform the start_time or last_time values into that format? Cheers!
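One option is to convert the epoch value with strftime in the search itself, so the formatted string is available as a token in the email (a sketch; start_time_fmt is an illustrative field name):

```
... | eval start_time_fmt=strftime(start_time, "%d-%m-%Y %H:%M:%S")
```

The email subject or message could then reference $result.start_time_fmt$ instead of the raw epoch field.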
I am searching across 2 different indexes and have to compare each value in field 1 from the 1st index with the values in field 2 from the 2nd index; a regex is also used for another field value. The displayed result should show a Match or No Match against each value.

Given query:

(index=cmi cef_vendor="Imperva Inc." cef_product="WAF" dvc="10.124.1.202" act="None" cs2="*" deviceSeverity=High) OR (index=case_management DeviceProduct=WAF fname IN ("*CMI - WAF*")) | rex field=fname "(-)(?(\s)(PROD|SFR)+(\s))(-)(?(\s)[\w]+(\s)[\w]+(\s))(?(\d)+(\s))(-)" | eval m=coalesce(cn1,alert) | stats values(cn1) as cn1 values(alert) as alert by m | table cn1 alert m

Results should be something like this table:

cn1      alert    m
453626   453626   Match
453624   453626   No Match

@elrich11
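Since m=coalesce(cn1,alert) already lines the two fields up, one way to derive the Match column (a sketch, building on the stats already in the query; Result is an illustrative field name) is to test whether both sides produced a value:

```
... | stats values(cn1) as cn1 values(alert) as alert by m
| eval Result=if(isnotnull(cn1) AND isnotnull(alert), "Match", "No Match")
| table cn1 alert Result
```

A value that appears in only one index leaves the other column null, which yields "No Match".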
Hi there, I'm pretty new to Splunk, but have got a fortigate set up to send all logs to Splunk. Simply looking to find all attacks that Fortigate identifies that are successful. Fortigate is being managed by another team. I've got index = "*" AND tag = "attack" AND action = "allowed", but that seems to be way too simple. Or am I hugely overthinking this?
With this search

index=useradmin sourcetype=role_capabilities | eval capabilities=replace(capabilities,"\s",",") | makemv delim="," capabilities | table role capabilities

I expected a result like:

role1   capability1
role1   capability2
role1   capability3
role2   capability1

Instead I get:

role1   capability1
        capability2
        capability3
role2   capability1

Probably my expectations of makemv are not correct, but I can't find another command to make this work. The reason I want it this way is to get the layout of the printed dashboard right.
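makemv only turns the value into a multivalue field within a single row; to get one row per capability, mvexpand can split the rows apart (a sketch):

```
index=useradmin sourcetype=role_capabilities
| makemv delim=" " capabilities
| mvexpand capabilities
| table role capabilities
```

Here the whitespace delimiter is used directly, so the replace(...) step is no longer needed.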
Hi, since roughly the 20th of October 2020 Apple sells and delivers several Mac machines (Mac Mini, MacBook Air and MacBook Pro) with the Apple Silicon M1 chip. Laptops are typically not a main target for things like Splunk monitoring, but the Mac Mini could be an interesting device for small offices (e.g. as SH or HFw) and ad hoc analyses. At the moment Splunk is only downloadable for macOS on Intel. Is there a timeline for when Splunk Enterprise and the Universal Forwarder will be ported to the new Apple chips? Kind regards, Dirk M.
How can I calculate the memory used percentage for Windows servers? Any suggestions?
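If the Splunk Add-on for Windows is collecting Perfmon data, one sketch (the index and sourcetype names depend on how the inputs are configured in your environment) uses the Memory object's "% Committed Bytes In Use" counter:

```
index=windows sourcetype="Perfmon:Memory" counter="% Committed Bytes In Use"
| stats latest(Value) as mem_used_pct by host
```

Alternatively, the percentage can be derived from the "Available MBytes" counter and the host's total physical memory if that counter is not collected.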
Hi, I want to know the volume of data present in each application on my search head. I have a query to calculate volume by index, but I want the volume of data calculated per application. Any help is appreciated!