All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Can anyone please help me create a regex for the log below?

> {\\n \\\"process\\\": \\\"get_input\\\",\\n \\\"totalProcessed\\\": \\\"0\\\",\\n \\\"SuccessfullyProcessed\\\": \\\"0\\\",\\n \\\"FailedToProcess\\\": \\\"0\\\",\\n \\\"FileName\\\": \\\"\\\"\\n}

I created the regex below, but for 'FileName' I am getting '\n'.

> | rex field=_raw "process\W+(?<process>[\w\s]+)" | rex field=_raw "totalProcessed\W+(?<totalProcessed>[\w\s]+)" | rex field=_raw "SuccessfullyProcessed\W+(?<SuccessfullyProcessed>[\w\s]+)" | rex field=_raw "FileName\W+(?<FileName>[\w\s]+)" | rex field=_raw "FailedToProcess\W+(?<FailedToProcess>[\w\s]+)"

It seems the regex needs some modification or rebuilding. Please help me with this. Thanks in advance.
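The `\W+ … [\w\s]+` pattern swallows the literal `\n` whenever a value is empty, because `\W+` greedily eats the quotes and `[\w\s]+` then matches whatever whitespace follows. Anchoring each capture between the escaped quotes avoids that. A minimal Python sketch, assuming the raw event really contains literal `\"` and `\n` character sequences as pasted (the same anchored pattern should carry over to `rex`):

```python
import re

# Raw event text: escaped JSON, so quotes appear as \" and newlines as a literal \n
raw = (
    r'{\n \"process\": \"get_input\",\n \"totalProcessed\": \"0\",'
    r'\n \"SuccessfullyProcessed\": \"0\",\n \"FailedToProcess\": \"0\",'
    r'\n \"FileName\": \"\"\n}'
)

# Capture each value between its escaped quotes instead of using [\w\s]+,
# so an empty value yields an empty string rather than swallowing the \n.
fields = dict(re.findall(r'\\"(\w+)\\": \\"(.*?)\\"', raw))
print(fields["process"])         # get_input
print(repr(fields["FileName"]))  # ''
```

With this anchoring, all five fields extract in one pass and an empty `FileName` comes back as an empty string.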
I am trying to find out whether the forwarders (FWs) are still using the default username and the default password of changeme. I appreciate your time in advance.
Hi to all, is it intended behavior or a bug that dashboards made with Dashboard Studio are not visible in the navigation menu? This is the code inside default.xml:

<collection label="DStudio">
  <collection label="HFWD">
    <view source="unclassified" name="hfwd_data_collection" />
  </collection>
</collection>

Instead of my Dashboard Studio dashboard, I see all the other dashboards. Thanks for the help.
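For comparison, the standard navigation entry lists a view by its id with a plain `<view name="…"/>`; since Dashboard Studio dashboards are still views, an entry without the `source` attribute may be worth trying (a sketch, not a confirmed fix — `hfwd_data_collection` is assumed to be the dashboard's id):

```xml
<collection label="DStudio">
  <collection label="HFWD">
    <view name="hfwd_data_collection" />
  </collection>
</collection>
```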
Hello, we are using Splunk Cloud and frequently get updates to our instance. That is fine, since we are always up to date. The issue is that we have no warning before an update happens, and it sometimes has impacts that we only discover later. Questions:
- How can we know when an update will happen?
- How can we get a history of the updates to our instances?
This search gives me the last restart of the instance and the version, but that doesn't necessarily mean the update was done at that time:
| rest /services/server/info | eval LastStartupTime=strftime(startup_time, "%Y/%m/%d %H:%M:%S") | table LastStartupTime , version
Thanks for your help
Need help hardening Splunk with the following, brothers/sisters. Thank you in advance.
- Where do I enable indexer acknowledgement, to ensure delivery of data from forwarders (FWs) to indexers? When enabled, the FW will resend any data not acknowledged as received by the indexer.
- Where do I enable event and data block signing, to meet a regulatory requirement?
- How do I verify that audit events and archives are cryptographically signed, to help detect any modification or tampering of the underlying data?
I appreciate your help in advance.
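On the first point: indexer acknowledgement is enabled on the forwarder side in outputs.conf with `useACK`. A minimal sketch — the group name and server addresses below are placeholders for your own environment:

```ini
# outputs.conf on the forwarder (placeholder group and server names)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true
```

With `useACK = true`, the forwarder holds data in its wait queue and resends anything the indexer does not acknowledge.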
I'm working on extracting Source Network Addresses from Splunk. I've spent the past few hours refining my query, and after a few days of researching and troubleshooting have narrowed it to the following. The problem is that Source_Network_Address in Windows event logs appears without spaces, and the query is pulling back data that is not accurate for me. I'm looking for public IPs RDPing to a host, not private IPs.

index=windows EventCode=4625 Source_Network_Address!="127.0.0.1" Source_Network_Address!="::1" | eventstats count as "EventCount" by EventCode | table EventCode EventCodeDescription EventCount Source_Network_Address ComputerName | sort EventCode | where EventCount>80

Yes, I've tried excluding internal subnets, but this still does not give the expected output. I need a way to extract Source Network Address without spaces. https://community.splunk.com/t5/Splunk-Search/Need-to-pull-IP-from-Message-field/m-p/559816 I tried this, however we are not extracting it via the IP field. When I go to extract the regex after searching by event count and index, the field gets cut off in the regex editor that loads. Not sure how to proceed here.
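In SPL, one common approach is to exclude each private range with `cidrmatch()`, e.g. `| where NOT (cidrmatch("10.0.0.0/8", Source_Network_Address) OR cidrmatch("172.16.0.0/12", Source_Network_Address) OR cidrmatch("192.168.0.0/16", Source_Network_Address))`. The public-vs-private test itself can be sketched outside Splunk with the standard address classifications (the sample addresses below are made up):

```python
import ipaddress

def is_public(addr: str) -> bool:
    """True for routable public addresses; False for private, loopback,
    link-local, or anything that does not parse as an IP at all."""
    try:
        ip = ipaddress.ip_address(addr.strip())
    except ValueError:
        return False  # handles "-", empty fields, or malformed values
    return ip.is_global

sources = ["127.0.0.1", "::1", "10.1.2.3", "192.168.0.5", "8.8.8.8"]
print([s for s in sources if is_public(s)])  # ['8.8.8.8']
```

`is_global` covers loopback, RFC 1918, and link-local ranges in one check, so the explicit exclusion list does not have to be maintained by hand.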
When I create a role and assign it to a user in Splunk Enterprise, I have successfully tested that the user can only see events/data from the indexes specifically selected for that role. However, when logged in as that user in Splunk Enterprise Security and accessing the "Security Posture" dashboard, for example, it appears the role restrictions set in Splunk Enterprise do not carry over to Enterprise Security. On the Security Posture dashboard, the user whose data access I want to limit can see everything, because there are no restrictions in place on the "es_notable_events" source in ES, for example. I would like to put a restriction in place so the logged-in user can only see notable events from the indexes the user is restricted to in Splunk Enterprise, i.e. only the data in the specifically selected indexes and nothing more. A restricted user and a Splunk admin have the same visibility into all data on the Security Posture dashboard (and all other applicable dashboards and displays as well). Is there a way to limit visibility in Enterprise Security (notable events and such) to only data from the indexes the restricted user has access to in Splunk Enterprise? It seems the role restrictions put in place in Splunk Enterprise do not carry over easily to Enterprise Security. I could create a bunch of customized dashboards and reports with queries filtering on hostname and/or IP range only, but that is a lot of work for something where an option may just need to be set, or an extra parameter added to the role somewhere. Maybe Splunk can make this easier to manage in a future release of Enterprise Security?
Please show me how and where I can find whether SSL encryption is enabled for:
- communication between Splunk search heads, indexers, and forwarders (FWs)
- the browser to Splunk Web (on the SHs)
- FWs to indexers
I really appreciate your help in advance. Thanks
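As a rough map of where these switches live (the port and cert path below are the usual defaults, so treat them as assumptions): browser-to-Splunk-Web TLS is `enableSplunkWebSSL` in web.conf on the search heads, and forwarder-to-indexer TLS is a `splunktcp-ssl` input stanza on the indexers. To see the effective value on a given instance, `splunk btool web list --debug` (and likewise for inputs/outputs) shows which file sets it.

```ini
# web.conf on the search heads: browser -> Splunk Web over HTTPS
[settings]
enableSplunkWebSSL = true

# inputs.conf on the indexers: forwarders -> indexers over TLS
# (port 9997 and the cert path are placeholder defaults)
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
```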
I have two indexes: one with business events [main], another with server performance metrics [metrics]. Say in [main] I have information about long-running processes with the fields Name, Host, StartTime, EndTime, .... How do I display the mean CPU utilization during process execution? I need something like:
... Host=$host$ | eval cpu = [ mstats avg(win_cpu.Percent_Processor_Time_mean) as psCpu where index=metrics host=$host$ AND starttime=StartTime AND endtime=EndTime | return $psCpu] | table Name cpu
Unfortunately this approach doesn't work, since the subsearch knows nothing about the main search's result fields, and I get "Unable to parse StartTime with format ...". Maybe it is somehow possible to run the main search over [metrics] and filter/group by the time intervals [StartTime;EndTime] taken from subsearch results over [main]?
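The usual workaround is to drive the correlation from the process events and, for each one, average only the metric samples that fall inside [StartTime, EndTime]. In SPL this is often done with `map` wrapping an `mstats` search, at the cost of one subsearch per row. The windowing logic itself, sketched in Python with made-up sample data:

```python
from datetime import datetime

# Hypothetical CPU samples from [metrics]: (timestamp, host, cpu_percent)
samples = [
    (datetime(2021, 9, 16, 10, 0), "web01", 20.0),
    (datetime(2021, 9, 16, 10, 5), "web01", 40.0),
    (datetime(2021, 9, 16, 10, 10), "web01", 60.0),
]

# Hypothetical processes from [main]: (name, host, start, end)
processes = [
    ("nightly_load", "web01",
     datetime(2021, 9, 16, 10, 0), datetime(2021, 9, 16, 10, 6)),
]

for name, host, start, end in processes:
    # Keep only samples on the same host inside the process's time window
    in_window = [cpu for ts, h, cpu in samples
                 if h == host and start <= ts <= end]
    mean_cpu = sum(in_window) / len(in_window) if in_window else None
    print(name, mean_cpu)  # nightly_load 30.0
```

The third sample (10:10) falls outside the window, so only the first two contribute to the mean.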
Hi Folks, my test data looks like this:
DOC_ID,PROCESS_ID,RECEIVER
DOC_10,PROC_A100,REC_0001
DOC_10,PROC_A100,REC_0002
DOC_20,PROC_A100,REC_0001
DOC_30,PROC_A100,REC_0001
DOC_50,PROC_A200,REC_0001
DOC_60,PROC_A200,REC_0001
stats count by PROCESS_ID,RECEIVER gives:
PROCESS_ID,RECEIVER,count
PROC_A100,REC_0001,3
PROC_A100,REC_0002,1
PROC_A200,REC_0001,2
I would like to append the total of distinct DOC_IDs for each PROCESS_ID: TOTAL_OF_DOCS for the PROC_A100 lines is 3, because PROC_A100 has DOC_10+DOC_20+DOC_30; TOTAL_OF_DOCS for the PROC_A200 line is 2, because PROC_A200 has DOC_50+DOC_60. Any hints are welcome. With kind regards
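One common SPL approach is to compute the distinct count before collapsing, e.g. `| eventstats dc(DOC_ID) as TOTAL_OF_DOCS by PROCESS_ID | stats count by PROCESS_ID RECEIVER TOTAL_OF_DOCS` (a sketch, untested). The intended logic on the test data, in Python:

```python
from collections import defaultdict

rows = [
    ("DOC_10", "PROC_A100", "REC_0001"),
    ("DOC_10", "PROC_A100", "REC_0002"),
    ("DOC_20", "PROC_A100", "REC_0001"),
    ("DOC_30", "PROC_A100", "REC_0001"),
    ("DOC_50", "PROC_A200", "REC_0001"),
    ("DOC_60", "PROC_A200", "REC_0001"),
]

counts = defaultdict(int)   # (process, receiver) -> event count
docs = defaultdict(set)     # process -> set of distinct doc ids
for doc, proc, rec in rows:
    counts[(proc, rec)] += 1
    docs[proc].add(doc)

for (proc, rec), n in sorted(counts.items()):
    print(proc, rec, n, len(docs[proc]))
```

This prints the same three grouped rows with the distinct-document total appended: 3 for both PROC_A100 rows and 2 for the PROC_A200 row.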
What could be the reason that no data is available after grouping with the transaction command? Before grouping with transaction, data is available.
Hi, in anything else this would seem very simple, but I seem to be flummoxed trying to do it in Splunk, probably not helped by having zero regex knowledge. I have a field whose values are in the format AAAABBCC. I want to return all values that have BB in position 5. If anyone could be so kind as to provide a sample, I can then pull it apart and try to work out how it does it. Thanks.
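In SPL this is a job for the regex command, e.g. `| regex myfield="^.{4}BB"` (where `myfield` is a placeholder for your field name): skip any four characters from the start, then require BB. The same pattern sketched in Python, with made-up values:

```python
import re

values = ["AAAABBCC", "AAAAXXCC", "ZZZZBBQQ", "BBAAAACC"]

# ^.{4} skips the first four characters, then BB must sit in positions 5-6
pattern = re.compile(r"^.{4}BB")
print([v for v in values if pattern.match(v)])  # ['AAAABBCC', 'ZZZZBBQQ']
```

Note that "BBAAAACC" is excluded: it contains BB, but not at position 5.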
Hi, we want to monitor disk partitions in AppDynamics. Is this possible? Thanks, Dinesh
Hello, I have an issue with the security of the Splunk installation. Actually it is not about Splunk itself: after each security audit in my company, the OS user splunk gets its shell set to nologin, which means I am not able to switch to it using the su command. Is it possible at all to run a Splunk instance on Linux without logon access to the "splunk" OS user? Surely one could run Splunk as a service, but when I think about creating/changing the configuration files, they all have to have the proper access rights, and the easiest way is to do that as "splunk". When operating as root, sooner or later there will be issues with that, I would say. Or is my understanding completely wrong? Kind regards, Kamil
Hi, when using iplocation to get the country list, I am mostly getting null values for Country. How do I get the exact country for an IP? Regards, Madhusri R
For example, I have a CSV where the date field looks like this: Billing Start= 43774.7083333. But when I format the cell manually, I see the actual date is 11/5/2019 5:00:00 PM, which is correct. I am using the query below, but it is not working. Can you please help me get the correct date?
eval date=strftime('Billing Start',"%M/%d/%Y %H:%M:%S %p")
For 43774.7083333 it gives me this output: 09/01/1970 13:09:34 PM
Hi, I have written the search below based on some Prometheus metrics being onboarded:

index=lab_openshift_prometheus sourcetype=openshift_prometheus metric_name=ceph_cluster_total_bytes | eval ceph_cluster_total_bytes_decimal = round(v,0) | append [ search index=lab_openshift_prometheus sourcetype=openshift_prometheus metric_name=ceph_cluster_total_used_bytes | eval ceph_cluster_total_used_bytes_decimal = round(v,0) ] | eval aaa = ceph_cluster_total_bytes_decimal - ceph_cluster_total_used_bytes_decimal / ceph_cluster_total_bytes_decimal | table aaa

Basically what I want to do is:
- convert each metric's v field (value) from scientific notation to decimal (rounding to 2 decimal places)
- do some arithmetic on the new decimal values and create a new field based on the result
I am able to create the new decimal-value fields, but when I do the arithmetic on them, the new aaa field does not contain any data. Can anyone help me with what I am doing wrong? Thanks in advance!
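Two things to check here. First, `append` puts the two metrics on different rows, so the row-level eval always sees a null on one side; a common fix is to collapse both onto one row first, e.g. `| stats max(ceph_cluster_total_bytes_decimal) as total max(ceph_cluster_total_used_bytes_decimal) as used` before the eval (a sketch, untested). Second, `a - b / a` divides first, so the expression needs parentheses. The precedence point in Python, with hypothetical values:

```python
total = 1000.0   # hypothetical ceph_cluster_total_bytes
used = 250.0     # hypothetical ceph_cluster_total_used_bytes

wrong = total - used / total     # division binds tighter: 1000 - 0.25
right = (total - used) / total   # fraction of the cluster still free

print(wrong)  # 999.75
print(right)  # 0.75
```

The same precedence applies in SPL's eval, so the utilisation formula should be written `(total - used) / total`.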
Hello, I am having an issue with iplocation displaying the wrong country using the following query:
index="office365" sourcetype = o365* Workload=AzureActiveDirectory Operation=UserLoggedin ActorIpAddress=152.37.xxx.xxx | iplocation ActorIpAddress | table Country
which shows the country as "United States". I checked the IP on different web IP locators, and all show it as UK, which is the correct location. If I run this query:
| makeresults | eval ip="152.37.xxx.xxx" | iplocation ip | table Country, ip
the country displays as the UK. Does anyone know what is causing this issue? I have updated the mmdb file to the latest release. TIA
Hi! I have a log that looks more or less like this:

'H 16-Sep-2021 10:57:03.084; 0:< Jrn.Directive "WindowSize" _
 , "[TMM_TEMP_HKLS_R20_V08x.rte]", "Sheet: 00 - Starting View" _
 , 1176, 922
'H 16-Sep-2021 10:57:03.251; 0:< Jrn.Directive "ScreenResolution" _
 , 324, 1200
'H 16-Sep-2021 10:57:03.251; 0:< Jrn.Directive "ProjToPage" _
 , "[TMM_TEMP_HKLS_R20_V08x.rte]", "Sheet: 00 - Starting View" _
 , 890.19441375881252 _
 , 890.19441375881252, 0.00000000000000, 0.00000000000000 _
 , 0.00000000000000, 890.19441375881252, 0.00000000000000 _
 , 0.00000000000000, 0.00000000000000, 890.19441375881252 _
 , 0.00000000000000, 0.00000000000000, 0.00000000000000
'H 16-Sep-2021 10:57:03.252; 0:<

I am looking for something that would help me analyze it and find big time gaps between events: something like a graph that would indicate how big the gaps were over time. I just need something that lets me avoid searching for those gaps event by event or with Notepad (the logs tend to be big). I am completely new to Splunk; someone just told me this is easily done with it. Thanks for any help.
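Once the events are indexed, one hedged suggestion in SPL is `| streamstats current=f last(_time) as prev_time | eval gap=prev_time-_time`, which can then be charted over time. As a quick standalone check of the idea, a Python sketch (the sample lines are shortened from the post, and the 30-second threshold is arbitrary):

```python
import re
from datetime import datetime

log = """'H 16-Sep-2021 10:57:03.084; 0:< Jrn.Directive "WindowSize" _
'H 16-Sep-2021 10:57:03.251; 0:< Jrn.Directive "ScreenResolution" _
'H 16-Sep-2021 10:58:07.252; 0:< Jrn.Directive "ProjToPage" _"""

# Pull every "16-Sep-2021 10:57:03.084"-style timestamp out of the log
STAMP = re.compile(r"\d{2}-\w{3}-\d{4} \d{2}:\d{2}:\d{2}\.\d{3}")
times = [datetime.strptime(m.group(), "%d-%b-%Y %H:%M:%S.%f")
         for m in STAMP.finditer(log)]

# Gap in seconds between each pair of consecutive events
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
big = [g for g in gaps if g > 30]
print(gaps)  # [0.167, 64.001]
print(big)   # [64.001]
```

The same gap series, bucketed by time, is what a Splunk timechart of the `gap` field would show.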
Hi everyone, I'm using the CheckPoint Firewall Block app to block some IPs. If I try to block them manually (screenshot omitted), I get a success response: the IP is blocked. However, when I configure an alert condition to block it automatically (screenshot omitted), the IP is not blocked. Has anyone had the same problem, and do you know how to solve it?