
All Posts

If you have multiple panels, you are probably going to have to use multiple tokens:

<html>
  <style>
    #single1 text { fill: $colour1$ !important; }
  </style>
</html>

| eval _colour=if(final_status="OK","Green","Red")
| fields final_status _colour</query>
<earliest>-15m</earliest>
<latest>now</latest>
<done>
  <set token="colour1">$result._colour$</set>
</done>

<html>
  <style>
    #single2 text { fill: $colour2$ !important; }
  </style>
</html>

| table status _colour</query>
<earliest>@d</earliest>
<latest>now</latest>
<done>
  <set token="colour2">$result._colour$</set>
</done>
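Put together, a minimal sketch of the two-token pattern; the panel ids and token names are illustrative. Because each panel's <done> handler sets only its own token, a refresh of one panel no longer blanks the colours of the others:

<row>
  <panel depends="$alwaysHide$">
    <html>
      <style>
        #single1 text { fill: $colour1$ !important; }
        #single2 text { fill: $colour2$ !important; }
      </style>
    </html>
  </panel>
</row>
<!-- in the search of the panel with id="single1": -->
<!--   <done><set token="colour1">$result._colour$</set></done> -->
<!-- in the search of the panel with id="single2": -->
<!--   <done><set token="colour2">$result._colour$</set></done> -->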
Hi @vid1,
you have to configure three items in /etc/rsyslog.conf.

In the MODULES section:

module(load="imudp")   # needs to be done just once

or

module(load="imtcp")   # needs to be done just once

depending on the protocol you're using.

Then, in the TEMPLATES section:

template(name="tmpl-paloalto" type="string"
  string="/var/log/remote/%fromhost%/paloalto/%HOSTNAME%/paloalto_%$YEAR%-%$MONTH%-%$DAY%_%$HOUR%.log")

This string must be modified based on the path and the names of the files to be written.

Lastly, the rule to implement:

ruleset(name="writeRemoteData"
        queue.type="fixedArray"
        queue.size="250000"
        queue.dequeueBatchSize="4096"
        queue.workerThreads="4"
        queue.workerThreadMinimumMessages="60000") {
  # network - paloalto
  if $HOSTNAME == "10.10.10.10" then {
    action(type="omfile"
           ioBufferSize="64k"
           flushOnTXEnd="off"
           asyncWriting="on"
           dynafile="tmpl-paloalto"
           DirCreateMode="0770"
           FileCreateMode="0660"
           template="fmt_default")
    stop
  }
}

This is the most important and most difficult part, because you have to implement all of your rules.
Ciao.
Giuseppe
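Once rsyslog writes those files, a monitor input on the forwarder picks them up. A minimal sketch, assuming the file path from the template above; the index, sourcetype and host_segment values here are assumptions to adjust to your environment:

# inputs.conf on the forwarder; the path mirrors the tmpl-paloalto template above.
# pan:log is the sourcetype the Palo Alto Add-on expects (an assumption here);
# host_segment=4 uses the %fromhost% directory as the event host.
[monitor:///var/log/remote/*/paloalto/*/*.log]
sourcetype = pan:log
index = netfw
host_segment = 4
disabled = false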
1. This search is not proper SPL. The quotes don't add up, so it's not obvious whether you're quoting the whole search or have unneeded quotes in it. 2. Are you sure you're not forgetting to escape the quotes in the string containing your search? 3. On Splunk's side, up to around 8.0 (or even a bit after that) the order of arguments to bin and timechart mattered: you needed to put "span=12h" as the first parameter, immediately after the command. A sufficiently modern Splunk version is more lenient and accepts the span parameter almost anywhere.
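To make point 2 concrete, here is a minimal sketch with the Splunk Python SDK. The connection details and index name are placeholders, and note that a oneshot search string must begin with the search keyword:

import splunklib.client as client
import splunklib.results as results

# Placeholder connection details; replace with your own.
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Triple quotes avoid escaping the inner double quotes. The
# "invalid decimal literal" error means the 12h leaked out of a
# broken string and was parsed as Python source.
searchquery_oneshot = '''search index="__eit_ecio*" | bin _time span=12h | stats count by _time'''

# output_mode="json" pairs with JSONResultsReader (recent splunk-sdk versions)
rr = results.JSONResultsReader(service.jobs.oneshot(searchquery_oneshot, output_mode="json"))
for item in rr:
    print(item)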
Yes, I need to configure rsyslog or syslog-ng on the Linux server.
Hi @vid1,
are you speaking of the output configuration on the NAS or the syslog input configuration on SC4S? About the NAS, I cannot help you; you should search in the NAS management menu. About SC4S, I don't like it: I prefer to configure rsyslog (or syslog-ng) for receiving, and then file inputs on a UF.
Ciao.
Giuseppe
Depends on what you mean by latency. If you mean pure network-level latency, then it's up to you to verify what latency you have between those environments, and no architecting can overcome that. But in terms of egress data: if you set many different environments in different clouds as peers of a single SH(C), you'll get a lot of traffic, since each time your search hits a centralizing command it has to send all the results it has so far to the SH layer.
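For reference, a minimal server.conf sketch for a search head that searches indexer clusters in two clouds; the hostnames and keys are placeholders, and newer releases spell these settings manager_uri and [clustermanager:...]:

# server.conf on the search head
[clustering]
mode = searchhead
master_uri = clustermaster:east, clustermaster:west

[clustermaster:east]
master_uri = https://cm-east.example.com:8089
pass4SymmKey = <key-for-east>

[clustermaster:west]
master_uri = https://cm-west.example.com:8089
pass4SymmKey = <key-for-west>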
That add-on is not working. We can collect logs from the syslog server, but I don't know how to configure it.
Hi @vid1 , check if the Dell PowerScale Add-On for Splunk (https://splunkbase.splunk.com/app/2689) is the correct one for you. Otherwise you have to create your own custom add-on. Ciao. Giuseppe
NAS (PowerScale storage logs). We need a syslog configuration on the HF. How do we configure syslog on our HF?
Hi @vid1,
what's your NAS technology? Is there an Add-On for it on apps.splunk.com? If yes, install it on the Forwarder and on the Search Head.
Ciao.
Giuseppe
Hi @gowthammahes,
if you want to limit the search time range for some users, you can apply a limit to the role of these users.
Ciao.
Giuseppe
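For example, a minimal authorize.conf sketch, assuming a hypothetical role named limited_search; srchTimeWin caps the role's search time window in seconds (30 days = 2592000):

# authorize.conf; the role name is illustrative
[role_limited_search]
importRoles = user
srchTimeWin = 2592000

Users who occasionally need the full 90-day retention can be given a second role without this restriction.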
Hello everyone, I have a requirement that data be searchable for up to the last 30 days on the search page, while the index retention period is 90 days. Basically, it should allow the user to search only within the last 30 days of events and, only if required, allow the user to search across the full 90 days. Is there any configuration available in Splunk to make data searchable or not searchable like this? Thanks in advance.
We need to integrate NAS logs into Splunk, but I don't know how. We have an SC4S container. Can anyone help with this?
Hi, if the issue exists on Windows, then it sounds like the general memory leak problem we have had since February this year, which Splunk doesn't seem to be acknowledging. See here: https://community.splunk.com/t5/Splunk-Enterprise/Memory-leak-in-Windows-Versions-of-Splunk-Enterprise/m-p/696849#M20010
(how do I give negative Karma?)  
Hello community, we are currently a bit desperate because of a Splunk memory leak problem under Windows OS that most probably all of you have, but may not have noticed yet. Here is the history and an analysis of it:

The first time we observed a heavy memory leak on a Windows Server 2019 instance was after updating to Splunk Enterprise version 9.1.3 (from 9.0.7). The affected Windows server has several Splunk apps installed (Symantec, ServiceNow, MS O365, DB Connect, SolarWinds), which start a lot of Python scripts at very short intervals. After the update, the server crashed every few hours due to low memory. We opened Splunk case #3416998 on Feb 9th.

With the MS Sysinternals tool rammap.exe we found a lot of "zombie" processes (PIDs no longer listed in Task Manager) which are still using a few KB of memory (~20-32 KB). The process names are btool.exe, python3.exe, splunk-optimiz and splunkd.exe. It seems that every time a process of one of these programs ends, it leaves behind such a memory allocation. The Splunk apps on our Windows server do this very often and very fast, which results in thousands of zombie processes.

After this insight we downgraded Splunk on the server to 9.0.7 and the problem disappeared. Then, on a test server, we installed Splunk Enterprise versions 9.1.3 and 9.0.9. Both versions show the same issue. New Splunk case #3428922. On March 28th we got this information from Splunk:

.... got an update from our internal dev team on this "In Windows, after upgrading Splunk enterprise to 9.1.3 or 9.2.0 consumes more memory usage. (memory and processes are not released)" internal ticket. They investigated the diag files and seems system memory usage is high, but only Splunk running. This issue comes from the mimalloc (memory allocator). This memory issue will be fixed in the 9.1.5 and 9.2.2 ..........

9.2.2 arrived on July 1st: unfortunately, still the same issue; the memory leak persists. Third Splunk case #3518811 (which is still open). Also not fixed in version 9.3.0. Even after an online session showing them the rammap.exe screen, they wanted us to provide diags again and again from our (test) servers, but they should actually be able to reproduce it in their own lab.

The huge problem is: because of existing vulnerabilities in the installed (affected) versions, we need to update Splunk (Heavy Forwarders) on our Windows servers, but we cannot, due to the memory leak issue.

How to reproduce:
- OS tested: Windows Server 2016, 2019, 2022, Windows 10 22H2
- Splunk Enterprise versions tested: 9.0.9, 9.1.3, 9.2.2 (Universal Forwarder not tested)
- let the default installation run for some hours (splunk service running)
- download rammap.exe from https://learn.microsoft.com/en-us/sysinternals/downloads/rammap and start it
- go to the Processes tab and sort by the Process column
- look for btool.exe, python3.exe and splunkd.exe entries with a small total memory usage of about ~20-32 KB; the PIDs of these processes don't exist in the task list (see Task Manager or tasklist.exe)
- with the default Splunk installation (without any other apps) the memory usage increases slowly, because the default apps' script intervals aren't very short
- stopping the Splunk service releases the memory (and the zombie processes disappear in rammap.exe)
- for faster results you can add an app for excessive testing with python3.exe, starting it at short (0 second) intervals. The test.py doesn't even need to exist; Splunk starts python3.exe anyway. Only an inputs.conf file is needed:

\etc\apps\pythonDummy\local\inputs.conf

[script://$SPLUNK_HOME/etc/apps/pythonDummy/bin/test.py 0000]
python.version = python3
interval = 0

[script://$SPLUNK_HOME/etc/apps/pythonDummy/bin/test.py 1111]
python.version = python3
interval = 0

(if you want, add some more stanzas: 2222, 3333, and so on)

- the more python script stanzas there are, the more and the faster the zombie processes appear in rammap.exe

Please share your experiences, and please open tickets with Splunk support if you also see the problem. We hope Splunk finally reacts.
@ITWhisperer: It works, but whenever any panel in the dashboard is refreshed, the colour of all the panels in the dashboard changes from Red/Green to white. In my case there are multiple panels, so when any one panel is refreshed, it changes the colour of all 6 panels from Green/Red to white. Is it possible to keep the colour always Red or Green?

Current code:

<row>
  <panel depends="$alwaysHide$">
    <html>
      <style>
        #single1 text { fill: $colour$ !important; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel>
    <title>EVIS DASHBOARD</title>
    <single id="single1">
      <search>
        <query>`macro_events_all_win_ops_esa` sourcetype=WinHostMon host=P9TWAEVV01STD (TERM(Esa_Invoice_Processor) OR TERM(Esa_Final_Demand_Processor) OR TERM(Esa_Initial_Listener_Service) OR TERM(Esa_MT535_Parser) OR TERM(Esa_MT540_Parser) OR TERM(Esa_MT542_Withdrawal_Request) OR TERM(Esa_MT544_Parser) OR TERM(Esa_MT546_Parser) OR TERM(Esa_MT548_Parser) OR TERM(Esa_SCM Batch_Execution) OR TERM(Euroclear_EVIS_Border_Internal) OR TERM(EVISExternalInterface))
| stats latest(State) as Current_Status by service
| where Current_Status != "Running"
| stats count as count_of_stopped_services
| eval status = if(count_of_stopped_services = 0 , "OK" , "NOK" )
| fields status
| append [ search `macro_events_all_win_ops_esa` host="P9TWAEVV01STD" sourcetype=WinEventLog "Batch *Failed" System_Exception="*"
  | stats count as count_of_failed_batches
  | eval status = if(count_of_failed_batches = 0 , "OK" , "NOK" )
  | fields status ]
| stats values(status) as status_list
| eval final_status = if(mvcount(mvfilter(status_list=="NOK")) &gt; 0, "NOK", "OK")
| eval _colour=if(final_status ="OK","Green","Red")
| fields final_status</query>
        <earliest>-15m</earliest>
        <latest>now</latest>
        <done>
          <set token="colour">$result._colour$</set>
        </done>
        <sampleRatio>1</sampleRatio>
        <refresh>1m</refresh>
        <refreshType>delay</refreshType>
      </search>
      <option name="drilldown">all</option>
      <option name="refresh.display">progressbar</option>
    </single>
  </panel>
</row>
<row>
  <panel depends="$alwaysHide$">
    <html>
      <style>
        #single2 text { fill: $colour$ !important; }
      </style>
    </html>
    <html>
      <style>
        #single3 text { fill: $colour$ !important; }
      </style>
    </html>
  </panel>
</row>
<row>
  <panel>
    <title>SEMT FAILURES DASHBOARD</title>
    <single id="single2">
      <search>
        <query>(index="events_prod_gmh_gateway_esa") sourcetype="mq_PROD_GMH" Cr=S* (ID_FCT=SEMT_002 OR ID_FCT=SEMT_017 OR ID_FCT=SEMT_018 ) ID_FAMILLE!=T2S_ALLEGEMENT
| eval ERROR_DESC= case(Cr == "S267", "T2S - Routing Code not related to the System Subscription." , Cr == "S254", "T2S - Transcodification of parties is incorrect." , Cr == "S255", "T2S - Transcodification of accounts are impossible.", Cr == "S288", "T2S - The Instructing party should be a payment bank.", Cr == "S299", "Structure du message incorrecte.", 1=1, "NA")
| stats count as COUNT_MSG
| eval status = if(COUNT_MSG = 0 , "OK" , "NOK" )
| eval _colour=if(status ="OK","Green","Red")
| table status</query>
        <earliest>@d</earliest>
        <latest>now</latest>
        <done>
          <set token="colour">$result._colour$</set>
        </done>
        <sampleRatio>1</sampleRatio>
        <refresh>1m</refresh>
        <refreshType>delay</refreshType>
      </search>
      <option name="colorBy">value</option>
      <option name="drilldown">all</option>
      <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
      <option name="refresh.display">progressbar</option>
      <option name="trellis.enabled">0</option>
      <option name="useColors">1</option>
    </single>
  </panel>
Hello, I have a query used in Splunk Enterprise web (search):

index="__eit_ecio*" | ... | bin _time span=12h | ... | table ... |

I am trying to put that into Python API code using the Job class, like this:

searchquery_oneshot = "<my above query>"

I am getting the error "SyntaxError: invalid decimal literal", pointing to the 12h in the main query. How can I fix this?

2) Can I direct "collect" results (summary index) via this API into JSON format?

Thanks
So what you are saying is: just configure indexer clusters in each cloud environment, and then use an SHC in any one of the clouds to search the indexer clusters in ALL cloud environments? Are you sure it won't cause latency at the time of SH aggregation? A diagram would be really appreciated.
Try this query (note: the timestamp rex needs a named capture group for the later stats to work, and the final table should reference the extracted ResourcePath field):

"My base query" ("Starting execution for request" OR "Successfully completed execution")
| rex "status:\s+(?<Status>.*)\"}"
| rex field=_raw "\((?<Message_Id>[^\)]*)"
| rex "Path\:\s+(?<ResourcePath>.*)\""
| rex "timestamp\\\":(?<timestamp>\d+)"
| stats min(timestamp) as startTime, max(timestamp) as endTime, values(*) as * by Message_Id
| eval duration = endTime - startTime
| eval end_timestamp_s = endTime/1000, start_timestamp_s = startTime/1000
| eval human_readable_etime = strftime(end_timestamp_s, "%Y-%m-%d %H:%M:%S"), human_readable_stime = strftime(start_timestamp_s, "%Y-%m-%d %H:%M:%S"), duration = tostring(duration, "duration")
| table Message_Id human_readable_stime human_readable_etime duration Status ResourcePath