All Topics

Hi, as you can see, I display 3 different panels (one map viz and 2 chart viz) in the same row. I have modified the standard width of these panels in CSS. Now I would like to add 2 other chart viz to the same row and expand the height of the row. Could you help me please?

<form>
  <label>XXX</label>
  <fieldset submitButton="false">
    <input type="time" token="tokTime" searchWhenChanged="true">
      <label>Select Time</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel depends="$alwaysHideCSS$">
      <html>
        <style>
          #chart{ width:20% !important; }
          #chart2{ width:20% !important; }
          #map{ width:60% !important; }
        </style>
      </html>
    </panel>
    <panel id="map">
      <title>XXX</title>
      <map>
        <search>
          <query></query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="mapping.map.center">(46,2)</option>
        <option name="mapping.map.zoom">5</option>
        <option name="mapping.type">marker</option>
        <option name="refresh.display">progressbar</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
      </map>
    </panel>
    <panel id="chart">
      <title>XXX</title>
      <chart>
        <search>
          <query></query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option>
        <option name="charting.axisTitleX.text">Bureaux</option>
        <option name="charting.axisTitleY.text">Nb utilisateurs</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fieldColors">{"nbsam":#f70505}</option>
        <option name="charting.legend.placement">none</option>
        <option name="height">230</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
    <panel id="chart2">
      <title>XXX</title>
      <chart>
        <search>
          <query></query>
          <earliest>$tokTime.earliest$</earliest>
          <latest>$tokTime.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.axisLabelsX.majorLabelStyle.rotation">-45</option>
        <option name="charting.axisTitleX.text">Bureaux</option>
        <option name="charting.axisTitleY.text">Nb utilisateurs</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.fieldColors">{"nbsam":#27B508}</option>
        <option name="charting.legend.placement">none</option>
        <option name="height">230</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
</form>
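One possible sketch, extending the CSS pattern already in this dashboard: give the two new panels ids (chart3 and chart4 below are hypothetical names), shrink the widths so five panels share the row, and raise the height option on each visualization so the row grows taller. The percentages and the 400px height are assumptions to adjust to taste.

<panel depends="$alwaysHideCSS$">
  <html>
    <style>
      #chart, #chart2, #chart3, #chart4 { width:15% !important; }
      #map { width:40% !important; }
    </style>
  </html>
</panel>

<!-- inside each chart panel, a larger height stretches the whole row -->
<option name="height">400</option>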
I have a query that returns a set of hosts that have an event string:

index=anIndex sourcetype=aSourceType ("aString1" AND (host="aHostName*")) | stats values(host) AS aServerList1

I have a list of servers ("Server1", "Server2", "Server3") <- ServerList2

What I'm trying to do is find the servers/hosts that are not returned by the initial query, i.e. hosts that exist in ServerList2 but are not in ServerList1.
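A sketch of one join-free way to do this, assuming the expected server list is small enough to hard-code (an inputlookup would slot into the same place): generate the expected hosts, then drop the ones the original search returns via a NOT subsearch.

| makeresults
| eval host=split("Server1,Server2,Server3", ",")
| mvexpand host
| search NOT [ search index=anIndex sourcetype=aSourceType "aString1" host="aHostName*"
    | stats count by host
    | fields host ]
| table host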
Hello, I have a tab with this field:

GET /url1/url2?code1=11&code2=12&code3=13 HTTP/1.1

I would like to split this field into 3 fields: code1, code2 and code3. I tried this Splunk command:

| rex field=message.jbosseap.access_log.http_request "codeRegate=(?<codeRegate>.*)"

but it does not work. How can I do this? Thank you!
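A minimal sketch, assuming the request line really lives in message.jbosseap.access_log.http_request and the three parameters always appear in that order (the \d+ pattern is an assumption; widen it if the values are not purely numeric):

| rex field=message.jbosseap.access_log.http_request "code1=(?<code1>\d+)&code2=(?<code2>\d+)&code3=(?<code3>\d+)"
| table code1 code2 code3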
I have:

sourcetype_A (fields: ID, age, city, state)
sourcetype_B (fields: ID, job, salary, gender)

The field "ID" is common to both sourcetype_A and sourcetype_B, but with a caveat:

example1: ID = 1687 appears in sourcetype_A as 0001687 and in sourcetype_B as 1687
example2: ID = 9843 appears in sourcetype_A as 009843 and in sourcetype_B as 9843
example3: ID = 8765 appears in sourcetype_A as 08765 and in sourcetype_B as 8765

where 1687, 9843, 8765 are the actual IDs; the leading zeros in sourcetype_A are creating the mess. I am not allowed to use join, so this is what I am trying, but I am not seeing all my data:

(index=country) sourcetype=sourcetype_A OR sourcetype=sourcetype_B
| eval ID = ltrim(ID,"0")
| eventstats dc(sourcetype) as dc_st
| where dc_st >1
| table ID, age, city, state, job, salary, gender

I also tried | stats values(age) as age ... by ID, but stats gave me massive multivalue fields with messy duplicates. I am asked to get one row per ID (no multivalue fields). Any help?
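Two things worth noting in the original attempt: eventstats dc(sourcetype) without a "by ID" counts sourcetypes over the whole result set rather than per ID, and values() will naturally produce multivalue fields. A sketch that folds both fixes into a single stats (latest() here is an arbitrary single-value choice; min(), max(), or values() plus mvdedup would also work):

(index=country) (sourcetype=sourcetype_A OR sourcetype=sourcetype_B)
| eval ID=ltrim(ID,"0")
| stats latest(age) as age latest(city) as city latest(state) as state
        latest(job) as job latest(salary) as salary latest(gender) as gender
        dc(sourcetype) as dc_st by ID
| where dc_st>1
| table ID, age, city, state, job, salary, gender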
I want to build this type of dashboard using internal data in Splunk, but I couldn't interlink this structure using Dashboard Studio. Please help with this. Thank you in advance, veeru
Hi, in the following log I want to extract Url, Method, ResponseTimeMs and StatusCode as a table.

log:

a_level="INFO", a_time="null", a_sub="xxx", a_uid="xx", a_tid="xx", a_rid="guid", a_thread="175" a_type="type", a_met="Move", a_msg="Method=GET,Uri=http://monolith-xxx.abc.com/v2/clients?skip=0top=100,MediaType=null,RemoteIP=::ffff:10.10.10.10,XRemoteIP=null,ContentType=application/json,ContentLength=9702,ResponseTimeMs=54,StatusCode=200,ReasonPhrase=null,Referrer=null

For URL, I want the full extract "http://monolith-xxx.abc-xyz/v2/clients?skip=0top=100"

My current Splunk query is as below:

index=aws_abc env=prd-01 uri Method StatusCode ResponseTimeMs
| eval DataSet=log
| rex field=DataSet "ResponseTimeMs=(?<ResponseTimeMs>\d+),StatusCode=(?<StatusCode>\d+)"
| rex field=DataSet "Url=(?<uri>[^,]+),Method=(?<Method>\w+)"
| table Url,Method,ResponseTimeMs, StatusCode

I get values in the table for ResponseTimeMs and StatusCode but not for URL and Method. Please help. Thanks
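Going by the sample, the likely culprit is that the message says Uri= (with Method before it), while the second rex looks for Url=...,Method=... and captures into a field named uri rather than the Url column used in the table. A sketch of an adjusted extraction, keeping the rest of the query as-is:

| rex field=DataSet "Method=(?<Method>\w+),Uri=(?<Url>[^,]+),"
| table Url, Method, ResponseTimeMs, StatusCode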
L.s.,

At our company we deploy the Windows UF on all the VDI machines. For security reasons we use a security policy where PowerShell is prohibited from running. On a VDI machine, the command Get-ExecutionPolicy -List | Format-Table -AutoSize gives me the following:

Scope          ExecutionPolicy
-----          ---------------
MachinePolicy  Undefined
UserPolicy     Undefined
Process        Undefined
CurrentUser    Undefined
LocalMachine   Restricted

The last entry is where the problem is: it restricts the Local System account from executing the scripts in the universal forwarder bin folder. It gives an error for the splunk-powershell.ps1 script and then consumes a lot of CPU.

What can we do to use the scripts while keeping the security policy? The policy is a requirement in our company, so we can't drop it.

Thanks in advance.
I have a server where logs are generated on a daily basis in this format:

/ABC/DEF/XYZ/xyz17012022.zip
/ABC/DEF/XYZ/xyz16012022.zip
/ABC/DEF/XYZ/xyz15012022.zip

or

/ABC/DEF/RST/rst17012022.gz
/ABC/DEF/RST/rst16012022.gz
/ABC/DEF/RST/rst15012022.gz

Every time I index a .gz, .tar or .zip file I get this error: "updated less than 10000ms ago, will not read it until it stops changing; has stopped changing, will read it now." This problem was addressed earlier in this post: https://community.splunk.com/t5/Developing-for-Splunk-Enterprise/gz-file-not-getting-indexed-in-splu... As suggested there, I have used "crcSalt = <SOURCE>", but I am still facing similar errors.

inputs.conf:

[monitor:///ABC/DEF/XYZ/xyz*.zip]
index = log_critical
disabled = false
sourcetype = Critical_XYZ
ignoreOlderThan = 2d
crcSalt = <SOURCE>

I am getting this event in the internal logs while ingesting the log file.
Hi, we installed Splunk_TA_nix and enabled both cpu.sh and cpu_metrics.sh to capture CPU-related logs. Is there an SPL query we can use to calculate CPU utilization? I do not have an in-depth Linux background, so I am not sure which fields should be used to calculate the CPU utilization percentage. If you can share the formula or the fields I need to use from Splunk_TA_nix, I would appreciate it. Our aim is to check the historical CPU utilization of our Splunk heavy forwarder. Thanks
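A sketch under a couple of assumptions: cpu.sh produces events with sourcetype=cpu, a CPU column whose "all" row summarizes every core, and a pctIdle field, so utilization can be approximated as 100 minus pctIdle. The index name and host filter below are placeholders for your environment.

index=os sourcetype=cpu CPU=all host=<your_heavy_forwarder>
| eval cpu_used_pct = 100 - pctIdle
| timechart span=15m avg(cpu_used_pct) as avg_cpu_pct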
Hello,everyone! At first, sorry for my bad English. I have a problem to join two result. The raw data is a reg file, like this:     Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services] [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\XboxNetApiSvc] "DisplayName"="@%systemroot%\\system32\\XboxNetApiSvc.dll,-100" "ErrorControl"=dword:00000001 "ImagePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,\ 74,00,25,00,5c,00,73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,73,\ 00,76,00,63,00,68,00,6f,00,73,00,74,00,2e,00,65,00,78,00,65,00,20,00,2d,00,\ 6b,00,20,00,6e,00,65,00,74,00,73,00,76,00,63,00,73,00,00,00 "Start"=dword:00000003 "Type"=dword:00000020 "Description"="@%systemroot%\\system32\\XboxNetApiSvc.dll,-101" "DependOnService"=hex(7):42,00,46,00,45,00,00,00,6d,00,70,00,73,00,73,00,76,00,\ 63,00,00,00,00,00 "ObjectName"="LocalSystem" "ServiceSidType"=dword:00000001 "RequiredPrivileges"=hex(7):53,00,65,00,54,00,63,00,62,00,50,00,72,00,69,00,76,\ 00,69,00,6c,00,65,00,67,00,65,00,00,00,53,00,65,00,49,00,6d,00,70,00,65,00,\ 72,00,73,00,6f,00,6e,00,61,00,74,00,65,00,50,00,72,00,69,00,76,00,69,00,6c,\ 00,65,00,67,00,65,00,00,00,00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\XboxNetApiSvc\Parameters] "ServiceDll"="%SystemRoot%\system32\XboxNetApiSvc.dll" "ServiceDllUnloadOnStop"=dword:00000001 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\xboxgip] "ImagePath"=hex(2):5c,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,\ 74,00,5c,00,53,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,64,00,72,\ 00,69,00,76,00,65,00,72,00,73,00,5c,00,78,00,62,00,6f,00,78,00,67,00,69,00,\ 70,00,2e,00,73,00,79,00,73,00,00,00 "Type"=dword:00000001 "Start"=dword:00000003 "ErrorControl"=dword:00000001 "Group"="NDIS" "Tag"=dword:00000001 "DisplayName"="@xboxgip.inf,%XBOXGIP_Desc%;Xbox Game Input Protocol Driver" "Description"="@xboxgip.inf,%XBOXGIP_Desc%;Xbox Game Input Protocol Driver" "Owners"=hex(7):78,00,62,00,6f,00,78,00,67,00,69,00,70,00,2e,00,69,00,6e,00,66,\ 00,00,00,00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\xboxgip\Linkage] "Export"=hex(7):5c,00,44,00,65,00,76,00,69,00,63,00,65,00,5c,00,78,00,62,00,6f,\ 00,78,00,67,00,69,00,70,00,00,00,00,00 "Bind"=hex(7):00,00 "Route"=hex(7):00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\xboxgip\Parameters] [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\XblGameSave] "DisplayName"="@%systemroot%\\system32\\XblGameSave.dll,-100" "ErrorControl"=dword:00000001 "ImagePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,\ 74,00,25,00,5c,00,73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,73,\ 00,76,00,63,00,68,00,6f,00,73,00,74,00,2e,00,65,00,78,00,65,00,20,00,2d,00,\ 6b,00,20,00,6e,00,65,00,74,00,73,00,76,00,63,00,73,00,00,00 "Start"=dword:00000003 "Type"=dword:00000020 "Description"="@%systemroot%\\system32\\XblGameSave.dll,-101" "DependOnService"=hex(7):55,00,73,00,65,00,72,00,4d,00,61,00,6e,00,61,00,67,00,\ 65,00,72,00,00,00,58,00,62,00,6c,00,41,00,75,00,74,00,68,00,4d,00,61,00,6e,\ 00,61,00,67,00,65,00,72,00,00,00,00,00 "ObjectName"="LocalSystem" "FailureActions"=hex:80,51,01,00,00,00,00,00,00,00,00,00,04,00,00,00,14,00,00,\ 00,01,00,00,00,10,27,00,00,01,00,00,00,10,27,00,00,01,00,00,00,10,27,00,00,\ 00,00,00,00,00,00,00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\XblGameSave\Parameters] "ServiceDll"="%SystemRoot%\System32\XblGameSave.dll" "ServiceDllUnloadOnStop"=dword:00000001 
"ServiceIdleTimeout"=dword:0000003c [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Wof] "SupportedFeatures"=dword:00000003 "DisplayName"="Windows Overlay File System Filter Driver" "ErrorControl"=dword:00000001 "Group"="FSFilter Compression" "Start"=dword:00000000 "Type"=dword:00000002 "DependOnService"=hex(7):46,00,6c,00,74,00,4d,00,67,00,72,00,00,00,00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Wof\Instances] "DefaultInstance"="Wof Instance" [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Wof\Instances\Wof Instance] "Altitude"="40700" "Flags"=dword:00000000 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Wof\Parameters] [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\workerdd] [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\workerdd\Device0] "InstalledDisplayDrivers"=hex(7):57,00,4f,00,52,00,4b,00,45,00,52,00,44,00,44,\ 00,00,00,00,00 "VgaCompatible"=dword:00000000 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\workfolderssvc] "DisplayName"="@%systemroot%\\system32\\workfolderssvc.dll,-102" "ErrorControl"=dword:00000001 "Group"="LocalService" "ImagePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,\ 74,00,25,00,5c,00,53,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,73,\ 00,76,00,63,00,68,00,6f,00,73,00,74,00,2e,00,65,00,78,00,65,00,20,00,2d,00,\ 6b,00,20,00,4c,00,6f,00,63,00,61,00,6c,00,53,00,65,00,72,00,76,00,69,00,63,\ 00,65,00,00,00 "Start"=dword:00000003 "Type"=dword:00000020 "Description"="@%systemroot%\\system32\\workfolderssvc.dll,-101" "DependOnService"=hex(7):52,00,70,00,63,00,53,00,73,00,00,00,77,00,73,00,65,00,\ 61,00,72,00,63,00,68,00,00,00,00,00 "ObjectName"="NT AUTHORITY\\LocalService" "ServiceSidType"=dword:00000001 "RequiredPrivileges"=hex(7):53,00,65,00,49,00,6d,00,70,00,65,00,72,00,73,00,6f,\ 00,6e,00,61,00,74,00,65,00,50,00,72,00,69,00,76,00,69,00,6c,00,65,00,67,00,\ 65,00,00,00,00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\wpcfltr] "DisplayName"="Family Safety Filter Driver" "ErrorControl"=dword:00000001 "Group"="NDIS" "ImagePath"=hex(2):73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,44,00,\ 52,00,49,00,56,00,45,00,52,00,53,00,5c,00,77,00,70,00,63,00,66,00,6c,00,74,\ 00,72,00,2e,00,73,00,79,00,73,00,00,00 "Start"=dword:00000003 "Type"=dword:00000001 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\wpcfltr\Security] "Security"=hex:01,00,14,80,8c,00,00,00,98,00,00,00,14,00,00,00,30,00,00,00,02,\ 00,1c,00,01,00,00,00,02,80,14,00,ff,01,0f,00,01,01,00,00,00,00,00,01,00,00,\ 00,00,02,00,5c,00,04,00,00,00,00,00,14,00,fd,01,02,00,01,01,00,00,00,00,00,\ 05,12,00,00,00,00,00,18,00,ff,01,0f,00,01,02,00,00,00,00,00,05,20,00,00,00,\ 20,02,00,00,00,00,14,00,9d,01,02,00,01,01,00,00,00,00,00,05,04,00,00,00,00,\ 00,14,00,8d,01,02,00,01,01,00,00,00,00,00,05,06,00,00,00,01,01,00,00,00,00,\ 00,05,12,00,00,00,01,01,00,00,00,00,00,05,12,00,00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WPDBusEnum] "Start"=dword:00000003 "DisplayName"="@%SystemRoot%\\system32\\wpdbusenum.dll,-100" "ErrorControl"=dword:00000001 "ImagePath"=hex(2):25,00,53,00,79,00,73,00,74,00,65,00,6d,00,52,00,6f,00,6f,00,\ 74,00,25,00,5c,00,73,00,79,00,73,00,74,00,65,00,6d,00,33,00,32,00,5c,00,73,\ 00,76,00,63,00,68,00,6f,00,73,00,74,00,2e,00,65,00,78,00,65,00,20,00,2d,00,\ 6b,00,20,00,4c,00,6f,00,63,00,61,00,6c,00,53,00,79,00,73,00,74,00,65,00,6d,\ 00,4e,00,65,00,74,00,77,00,6f,00,72,00,6b,00,52,00,65,00,73,00,74,00,72,00,\ 69,00,63,00,74,00,65,00,64,00,00,00 "Type"=dword:00000020 
"Description"="@%SystemRoot%\\system32\\wpdbusenum.dll,-101" "DependOnService"=hex(7):52,00,70,00,63,00,53,00,73,00,00,00,00,00 "ObjectName"="LocalSystem" "ServiceSidType"=dword:00000001 "RequiredPrivileges"=hex(7):53,00,65,00,41,00,75,00,64,00,69,00,74,00,50,00,72,\ 00,69,00,76,00,69,00,6c,00,65,00,67,00,65,00,00,00,53,00,65,00,43,00,68,00,\ 61,00,6e,00,67,00,65,00,4e,00,6f,00,74,00,69,00,66,00,79,00,50,00,72,00,69,\ 00,76,00,69,00,6c,00,65,00,67,00,65,00,00,00,53,00,65,00,43,00,72,00,65,00,\ 61,00,74,00,65,00,47,00,6c,00,6f,00,62,00,61,00,6c,00,50,00,72,00,69,00,76,\ 00,69,00,6c,00,65,00,67,00,65,00,00,00,53,00,65,00,43,00,72,00,65,00,61,00,\ 74,00,65,00,50,00,65,00,72,00,6d,00,61,00,6e,00,65,00,6e,00,74,00,50,00,72,\ 00,69,00,76,00,69,00,6c,00,65,00,67,00,65,00,00,00,53,00,65,00,49,00,6d,00,\ 70,00,65,00,72,00,73,00,6f,00,6e,00,61,00,74,00,65,00,50,00,72,00,69,00,76,\ 00,69,00,6c,00,65,00,67,00,65,00,00,00,00,00 "FailureActions"=hex:80,51,01,00,00,00,00,00,00,00,00,00,03,00,00,00,14,00,00,\ 00,01,00,00,00,c0,d4,01,00,01,00,00,00,e0,93,04,00,00,00,00,00,00,00,00,00 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WPDBusEnum\BthActiveConnect] "ACInterval"=dword:00000078 "DCInterval"=dword:000000f0 [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WPDBusEnum\Parameters] "ServiceDllUnloadOnStop"=dword:00000001     You can save it to .reg file and import to splunk. The first search result is : The second search result is : And my problem is how to join this two search when SrvName=SrvName2,the final result should be like below: How to solve this problem with splunk? Thank you,my friends!!  
Hi all, I have a token "Duration", and the values which will be passed to the drilldown are duration<15, 15<duration<=25 and duration>25. How can I pass the value 15-25 as a token?
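One possible approach (a sketch, assuming the panel's search can be adjusted): pre-compute a label field such as duration_bucket so the value "15-25" already exists as a plain string, then let the drilldown pass $click.value$ or $row.duration_bucket$ into the token.

... | eval duration_bucket=case(duration<15, "<15", duration<=25, "15-25", duration>25, ">25")
| stats count by duration_bucket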
I am trying to write a query to calculate the number of bytes received and sent per day from one of our firewalls at our site to a firewall at another site. This is to create a series of daily metrics for management. I've come up with a query that succeeds most of the time.

Current query:

index=syslogindex device=firewall vpn=site1-to-site2
| bin span=1d _time
| stats range(rcvdbyte) as rcvdbyte range(sentbyte) as sentbyte by _time

However, this query fails on days when the VPN tunnel is reset. The rcvdbyte and sentbyte fields that come from the firewall are summed values from the moment the VPN tunnel is started. When the tunnel is reset, it creates a new tunnelid and resets the rcvdbyte, sentbyte, and duration counts to zero. The current query then calculates a massive spike for those days, since the range of the rcvdbyte field is now zero minus whatever the previous summed amount of the rcvdbyte field was.

There are a few ways I can think of changing the query to account for when the tunnel is reset. One idea is to track tunnelid over time while still calculating daily rcvdbyte and sentbyte ranges. Another is to somehow detect when rcvdbyte, sentbyte, or even duration get reset to zero and do a different calculation for that day. Another option is to just disregard the days when it is reset. However, I haven't been able to implement any of these. Does anyone have any different ideas, or know how I can implement one of mine?

An example event:

date=2021-06-01 time=23:50:43 device=firewall serialid=1234567891 loggingid=123456789 type=event subtype=vpn loggingdesc="tunnel statistics" loggingmsg="tunnel statistics" action=tunnel-stats remoteip=192.168.1.2 localip=192.168.2.2 remoteport=60000 localport=60000 vpn="site1-to-site2" tunnelid=1234567891 tunneltype="vpn" duration=10170 sentbyte=120 rcvdbyte=360
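A sketch of the "detect the reset" idea, assuming events arrive in time order and that a reset always comes with a new tunnelid: compute the per-event increase with streamstats, treat the first event of each tunnelid (or any backwards jump) as its own delta, then sum the deltas per day.

index=syslogindex device=firewall vpn=site1-to-site2
| sort 0 _time
| streamstats current=f window=1 last(rcvdbyte) as prev_rcvd last(sentbyte) as prev_sent by tunnelid
| eval rcvd_delta=if(isnull(prev_rcvd) OR rcvdbyte<prev_rcvd, rcvdbyte, rcvdbyte-prev_rcvd)
| eval sent_delta=if(isnull(prev_sent) OR sentbyte<prev_sent, sentbyte, sentbyte-prev_sent)
| bin span=1d _time
| stats sum(rcvd_delta) as rcvdbyte sum(sent_delta) as sentbyte by _time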
Getting numerous errors like this on the Indexer Clustering: Service Activity page:

Failed to trigger replication (err='Cannot replicate remote storage enabled warm bucket, bid=index_name~680~3587E22A-71BF-4194-XXX-C1115EECE until it's uploaded')

Is this normal? If not, please suggest how to fix it. Thanks.
Hello everyone, I have a simple question. In some of the training I took, I was told that "Volume used today" resets at midnight. Is this true or false?
Hello, I was wondering if it is possible to use Splunk to query IIS logs for a monthly application hit count for multiple web applications on the same domain? The report I need to submit would look something like:

http://domain/webapp1/  -  ## total monthly hits
http://domain/webapp2/  -  ## total monthly hits
...

I just need the overall total monthly hit count, not the total unique IP address hit count. Any help would be much appreciated. Thank you!
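A sketch, assuming the IIS logs are already indexed with the add-on's standard field names (cs_uri_stem for the requested path) and that application paths look like /webapp1/...; swap in your own index and sourcetype:

index=iis sourcetype=iis earliest=-1mon@mon latest=@mon
| rex field=cs_uri_stem "^/(?<webapp>[^/]+)/"
| stats count as monthly_hits by webapp
| sort - monthly_hits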
I searched around and am trying to pin down options for sending Universal Forwarder logs to Splunk Cloud. Diagrams, links and experiences deeply appreciated.
Hello! I need help creating a custom trigger alert condition: when I run the search below, it should alert me when a new version appears compared to the versions that were listed the day before. The alert would run once each day, so if I had 1.1.1 and 1.1.2 the day before, but yesterday the results showed 1.1.1, 1.1.2 and 1.1.3, it should send me an alert because that new version was detected. How would I go about setting up that custom alert?

| inputlookup program_version.csv
| where date>=relative_time(now(), "-30d@d")
| eval _time=date
| timechart max(count) by version
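One way to sketch it, assuming the date field in program_version.csv is epoch time (as the eval _time=date suggests): compare the set of versions seen in the latest full day against the day before, keep only the versions that are new, and set the alert to trigger when the number of results is greater than zero.

| inputlookup program_version.csv
| where date>=relative_time(now(),"-2d@d") AND date<relative_time(now(),"@d")
| eval day=if(date<relative_time(now(),"-1d@d"), "previous", "latest")
| eval is_latest=if(day="latest",1,0), is_previous=if(day="previous",1,0)
| stats max(is_latest) as seen_latest max(is_previous) as seen_previous by version
| where seen_latest=1 AND seen_previous=0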
To preface my question: I've gone over the docs and multiple other questions trying to find a definitive solution, but am still running into a wall. I read through the props.conf documentation, the timezone documentation, and multiple other posts. The answer may be in front of me, but if so I'm missing it, and I apologize in advance.

My issue: I have a bunch of devices generating syslog events that are being sent straight to Splunk with nothing in between: Cisco switches and routers, Palo Alto firewalls, NTP servers, environmental sensors, and RHEL hosts, all using index:syslog and sourcetype:syslog. While I recognize this is far from ideal, it is the environment I was handed when I was made the Splunk admin, and I'm trying to work through it. For the most part this works; with enough field-value pair tags, field extractions, and detailed search filters I'm getting the info I need from the hosts.

The problem is that a few (12) of our hosts are using GMT as their timezone, while everything else is using local time (CST) - this is something that cannot be changed; they must use GMT. Also, the timezone is not identified within the text of the event; it's just a timestamp. Because of this, we're getting events from those hosts that, to Splunk, are occurring six hours in the future, findable only by using (earliest=+1h latest=+7h) in our searches. This isn't viable when trying to look at events from multiple hosts in conjunction.

My fix was to try to add a timezone designation within props.conf, using a regex to identify the affected hosts in a single stanza. I put the regex together and verified it works by running a search with it, which pulled only the hosts I wanted. So, in Splunk/etc/system/local/props.conf I added the stanza:

[host::(doma0wkst*|domsrv(10|11)|192.168.10(12|14|16|18))]
TZ = UTC

to identify the affected hosts (all hosts that start with "doma0wkst", domsrv10 & 11, and 192.168.10.12, .14, .16, .18) and tell Splunk they are reporting UTC time. My understanding was that Splunk would take this and automatically convert the event times to local so that they would align with all the other events we receive. But this is not working: after adding the stanza and restarting the Splunk service, I'm still getting events from the future.

My second thought was to add multiple stanzas, one per host; if that is the best solution, that is what I will do. But I figured I would ask here to see if there is a better solution first.
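For what it's worth, two things commonly bite here, offered as a hedge rather than a definitive diagnosis: the TZ setting only takes effect on the instance that first parses the events (an indexer or heavy forwarder, not a search head), and the pattern syntax in [host::...] stanzas is Splunk's own wildcard/alternation matching rather than full regex, so a combined pattern can silently fail to match (for example, 192.168.10(12|14|16|18) has no dot before the last octet). A conservative per-host sketch, assuming every affected host should get the same TZ:

[host::doma0wkst*]
TZ = UTC

[host::domsrv10]
TZ = UTC

[host::domsrv11]
TZ = UTC

[host::192.168.10.12]
TZ = UTC

# ...repeat for 192.168.10.14, .16 and .18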
Hey all, newbie here learning Splunk. I'm starting to get into dashboards and want to create either a pie chart or just a simple count of how many times a certain string occurs in a log file.

| stats count("no phase found for entry") count("no work order found")

This returns two columns, but they both have 0 in them. If I just search for each string individually, or with an OR statement, it returns all the entries (around 118 combined). I've been reading through the Splunk documentation on stats but can't seem to find an answer on how to combine two counts of anything. Any help is appreciated!
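A sketch of the usual pattern: stats count(X) counts events where a field named X exists, which is why counting literal strings comes back as 0. Labelling each event first and counting by that label works for both a table and a pie chart (the index filter below is a placeholder for your base search):

index=your_index ("no phase found for entry" OR "no work order found")
| eval reason=case(searchmatch("no phase found for entry"), "no phase found for entry",
                   searchmatch("no work order found"), "no work order found")
| stats count by reason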
I've been trying to resolve this since October and am not getting traction, so I'm turning to the community for help. I have seemingly contradictory information within the same log line, which makes me question whether we have an issue or not. On the one hand, I think we do, because the history command shows the search is cancelled, and I trust that information. However, there are artifacts in the logs that suggest the search is fully running (which appears to be true, since "fully_completed_search=TRUE"), so I am now confused about whether we have a problem or not. Why do searches show fully_completed_search=TRUE and has_error_warn=FALSE when the info field (and the history command) show "cancelled" and carry a tag of "error"?

BOTTOM LINE QUESTION: Are my searches running correctly and returning all results or not?

Sample _audit log search activity that I found - not sure if this gives any usable insight:

Audit:[timestamp=10-01-2021 16:31:40.338, user=redacted_user, action=search, info=canceled, search_id='1633105804.108286', has_error_warn=false, fully_completed_search=true, total_run_time=18.13, event_count=0, result_count=0, available_count=0, scan_count=133645, drop_count=0, exec_time=1633105804, api_et=1633104900.000000000, api_lt=1633105800.000000000, api_index_et=N/A, api_index_lt=N/A, search_et=1633104900.000000000, search_lt=1633105800.000000000, is_realtime=0, savedsearch_name="", search_startup_time="1270", is_prjob=false, acceleration_id="98DCBC55-D36C-4671-93CD-1A950D796EC4_search_redacted_user_311d202b50b71a64", app="search", provenance="N/A", mode="historical_batch", workload_pool=standard_perf, is_proxied=false, searched_buckets=53, eliminated_buckets=0, considered_events=133645, total_slices=331408, decompressed_slices=11305, duration.command.search.index=120, invocations.command.search.index.bucketcache.hit=53, duration.command.search.index.bucketcache.hit=0, invocations.command.search.index.bucketcache.miss=0, duration.command.search.index.bucketcache.miss=0, invocations.command.search.index.bucketcache.error=0, duration.command.search.rawdata=2533, invocations.command.search.rawdata.bucketcache.hit=0, duration.command.search.rawdata.bucketcache.hit=0, invocations.command.search.rawdata.bucketcache.miss=0, duration.command.search.rawdata.bucketcache.miss=0, invocations.command.search.rawdata.bucketcache.error=0, roles='redacted', search='search index=oswinsec (EventID=7036 OR EventID=50 OR EventID=56 OR EventID=1000 OR EventID=1001) | eval my_ts2 = _time*1000 | eval indextime=_indextime |table my_ts2,EventID | rename EventID as EventCode']