All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


On a Dashboard, I have a Pie Chart that shows Status Codes. If I hover over each code, Splunk shows me the percentage of the total for that code, but that percentage is not included when I export the Pie Chart to CSV (only the codes and the count are included). Is it possible to export the percentage as well? Thanks
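A possible approach (a sketch — the field names `status_code` and `count` are assumptions, since the dashboard's underlying search isn't shown) is to compute the percentage in the search itself, so it becomes a real column in the exported CSV:

```
... | stats count by status_code
| eventstats sum(count) as total
| eval percent=round(count/total*100, 2)
| fields - total
```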
In Splunk Enterprise, the log4j scanner below flags the following files as vulnerable. Can somebody please provide steps on how I can remediate this? Is it a case of upgrading all Splunk servers with the latest version from https://logback.qos.ch/download.html? If not, please advise the steps, and will it require me to reboot all related Splunk servers?

log4j/logback scanner: https://github.com/logpresso/CVE-2021-44228-Scanner

Files flagged as vulnerable:
C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\jars\command.jar Logback 1.2.3 CVE-2021-42550
C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\jars\dbxquery.jar Logback 1.2.3 CVE-2021-42550
C:\Program Files\Splunk\etc\apps\splunk_app_db_connect\jars\server.jar Logback 1.2.3 CVE-2021-42550

Many thanks
That's the XML:

<SearchCustomer>
<Transaction Name="Naviline" Time="02/11/1982 01:25:07:223" Duration="9.034" />
<Transaction Name="SePipeline" Time="02/11/1982 01:25:07:899" Duration="0.662" />
<Transaction Name="NdwIncuse" Time="02/11/1982 01:25:09:553" Duration="1.614" />
<Transaction Name="EnterDetails" Time="02/11/1982 01:25:11:532" Duration="1.916" />
<Transaction Name="SIline" Time="02/11/1982 01:25:12:703" Duration="1.132" />
<Transaction Name="GetWindowIn" Time="02/11/1982 01:25:20:748" Duration="7.957" />
<Transaction Name="PrimaryAddress" Time="02/11/1982 01:25:22:154" Duration="1.375" />
<Transaction Name="WindowingTouch" Time="02/11/1982 01:25:51:674" Duration="1.365" />
<Transaction Name="dailysearch" Time="02/11/1982 01:26:01:908" Duration="10.141" />
<Transaction Name="SearchInServicing" Time="02/11/1982 01:26:03:115" Duration="1.149" />
<Transaction Name="NavExistingAddresses" Time="02/11/1982 01:26:05:060" Duration="1.885" />
</PerformanceReport>
</SearchCustomer>

I'm looking for a dashboard like the one below.
Does anyone know of an add-on or other script that would allow one to analyze network traffic to detect beaconing using a Fourier transform (FFT)?
Hello, we have a bunch of files stored (or uploaded) in our clients' SharePoint sites. How would I ingest them into Splunk? Any help will be highly appreciated. Thank you.
Hi,

index=toto sourcetype=tutu type=*
| fields host _time runq type
| join host
    [ search index=toto sourcetype=tutu type=*
      | fields host core
      | stats max(core) as nbcore by host ]
| eval Vel = (runq / nbcore)
| eval _time = strftime(_time, "%d-%m-%y %H:%M:%S")
| sort - _time
| rename host as Host, _time as Heure
| table Heure Host Vel

I use the search above. For one host, an event is indexed every 40 seconds. Now I need to group these events into spans of 30m, so I have added a bin span like this:

| bin _time span=30m

So for one host there are many events with the same span value. Now what I need is just the last event indexed for the host in each span, so I need to display something like: "host" "time span" "last event generated". I think it's not very difficult but I have a bug. Could you help please?
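One hedged sketch of the "last event per host per 30-minute span" part: preserve the raw timestamp before binning, then take the latest per bucket (field names carried over from the post):

```
index=toto sourcetype=tutu type=*
| eval event_time=_time
| bin _time span=30m
| stats latest(event_time) as last_event by _time host
| eval last_event=strftime(last_event, "%d-%m-%y %H:%M:%S")
| table _time host last_event
```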
Is it possible to put time modifiers like "earliest" into a search and essentially disregard the time range drop-down in the Splunk UI? I have data that is logged once every 24 hours, so I'd like to embed "WHERE earliest=-24h" into a rather large, complicated query so I can cut and paste from my notes without having to mess around with the drop-down (or, more importantly, so I don't need to make additional notes to remind myself to set the drop-down). I tried something like this:

index=iis sourcetype=xxxx host=xxxx | WHERE earliest=-24h | eval... | table...

But the UI shows "Error in 'where' command: The operator at 'h' is invalid."
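For reference, `earliest` is a search-time modifier rather than a `where` function, so it goes directly into the base search, where it overrides the time picker — a sketch using the placeholders from the post:

```
index=iis sourcetype=xxxx host=xxxx earliest=-24h
| eval ...
| table ...
```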
Hi there, I'm trying to return the list of access_users with 0 web hits from the web_hits table. How can I adjust this query to return the list of users with no hits from the web_hits table? Thanks in advance!

| inputlookup web_hits.csv
| lookup local=t access_users.csv user OUTPUT user as access_user
| search access_user="*"
| stats count as num_webhits by access_user
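A sketch of one way to invert the logic — start from the users lookup and keep only rows with no match in web_hits (this assumes both files share a `user` field):

```
| inputlookup access_users.csv
| lookup local=t web_hits.csv user OUTPUT user as hit_user
| where isnull(hit_user)
| table user
```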
Dear Team,

Could you please send me the controller sizing calculator, as I need to install an on-premises Controller? I have 40 applications, both Java and .NET, and the number of nodes is 30. Could you please help me with the calculator link for controller sizing?

Thanks,
Kamal
Hi All, I have a dropdown box, a few text boxes, and a submit button on my dashboard. I need to choose one value from the dropdown, enter values in the text boxes, and finally click the SUBMIT button.

Use case: I have to pass all the above-mentioned inputs to my Python script when clicking the SUBMIT button on my dashboard (dashboard image attached for reference). What is the way to achieve this use case in Splunk? Any help on this is appreciated. Thanks!
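Simple XML cannot run an arbitrary script from a button directly; one common pattern is to wrap the script as a custom search command and drive it from the form tokens. A sketch, where `runmyscript` is a hypothetical custom command you would implement yourself (via commands.conf and a Python script in the app's bin directory):

```xml
<form>
  <fieldset submitButton="true">
    <input type="dropdown" token="choice"></input>
    <input type="text" token="param1"></input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| runmyscript choice="$choice$" param1="$param1$"</query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

With submitButton="true", the search (and therefore the script) only fires when SUBMIT is clicked and the tokens are populated.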
Hello, I am wondering when my index will roll from warm to cold with the settings below; the rest of the settings are default:

[XXXX]
frozenTimePeriodInSecs = 46656000
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 4294967295
maxWarmDBCount = 4294967295
repFactor = auto

Will the default setting maxHotSpanSecs = 7776000 roll the buckets from warm to cold, or does it only limit the time span within a bucket? When will the buckets roll from warm to cold in this case? Thanks in advance!
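For what it's worth, warm buckets roll to cold when either maxWarmDBCount is exceeded or the hot+warm storage cap is reached; maxHotSpanSecs only limits the time span of a hot bucket before it rolls to warm. A sketch of indexes.conf settings that would actually trigger warm-to-cold rolling (the values are illustrative, not recommendations):

```ini
[XXXX]
# Oldest warm bucket rolls to cold once there are more than 300 warm buckets.
maxWarmDBCount = 300
# Alternatively, cap hot+warm storage size; exceeding it also rolls warm to cold.
homePath.maxDataSizeMB = 500000
```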
https://www.appdynamics.com/partners/technology-partners/google-cloud-platform

Hi, from the link above it is not clear to me whether this is general information or whether I can really monitor cloud-native applications, especially with end-user monitoring. Can you please point me to how to configure this, and to any related documentation? Good day.
Hi,

I am trying to collect Windows logs from DCs and send them to both a Splunk indexer and a third-party system (Snare Central). I managed to send the logs using a syslog configuration, but somehow the logs are getting broken. I want my log format to be "snare over syslog". Please suggest.

UF => HF => Snare Central
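A hedged sketch of a syslog output group on the heavy forwarder (hostname and port are placeholders). Note that Splunk's syslog output prepends its own syslog header and sends the raw event text, so if Snare Central needs the exact "snare over syslog" format, the payload itself must already be in Snare format when it leaves the source:

```ini
# outputs.conf on the heavy forwarder
[syslog:snare_central]
server = snare-central.example.com:514
type = udp
```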
Generally indexer is used to store indexes but in the standalone architecture how data is stored ???  
Is it currently possible to do multiclass classification with any of the algorithms in the MLTK? I have investigated the RandomForestClassifier algorithm, which has multiclass functionality, but looking at the parameters available in the Splunk MLTK documentation (RandomForestClassifier) I do not see any of the multiclass parameters being available (specifically n_classes_). See also sklearn - RandomForestClassifier.
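For what it's worth, scikit-learn's RandomForestClassifier handles multiclass targets without any extra parameters — it fits however many classes the target field contains — and `n_classes_` is a fitted attribute, not a constructor parameter, which is likely why it isn't listed among MLTK's fit options. A sketch (field and model names are placeholders):

```
... | fit RandomForestClassifier category from feature_1 feature_2 feature_3 into my_multiclass_model
```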
Hi Team, in the Tiers and Nodes view of AppDynamics I found that JVM heap, max heap, JVM CPU burnt, and GC time spent are showing zero. When I checked, the app and machine agent statuses show up and running. I found the errors below in the app agent logs. Can someone suggest how to resolve this issue?

[AD Thread-Metric Reporter1] 07 Jan 2022 03:12:27,211 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:12:52,221 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global1] 07 Jan 2022 03:12:52,221 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:13:22,227 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:13:22,227 ERROR AgentKernel - Error executing task -
[AD Thread-Metric Reporter0] 07 Jan 2022 03:13:27,212 ERROR JVMMetricReporter - Error getting thread count
[AD Thread-Metric Reporter0] 07 Jan 2022 03:13:27,212 WARN JVMMetricReporter - Error updating JVM JMX values
[AD Thread-Metric Reporter0] 07 Jan 2022 03:13:27,212 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global1] 07 Jan 2022 03:13:52,223 ERROR AgentKernel - Error executing task -
[AD Thread Pool-Global0] 07 Jan 2022 03:13:52,223 ERROR AgentKernel - Error executing task -

Regards,
Charan
Dear Splunk Community, every 5 minutes one of the following events is generated:

2022-01-05 21:20:33 : Running
OR
2022-01-05 20:19:33 : Failed

I would like to display a timeline with two (2) lines showing when the system is running and when it fails. I have come this far:

running OR failed
| eval status = if(like(_raw, "%Running%"), "Running", "Not running")
| table status

I am in need of some guidance in this matter. How do I change the above search so that I have a line chart visualization with the two lines in it? Thanks in advance.
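Building on the eval in the post, a sketch that produces one time series per status (the 5-minute span matches the event frequency described):

```
running OR failed
| eval status=if(like(_raw, "%Running%"), "Running", "Not running")
| timechart span=5m count by status
```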
Hello all,

I am trying to extract a field from the event below using the extraction below; however, the extraction fails to capture the complete field and breaks in the middle. Can you please help resolve this?

212685,00004107,00000000,2404,"20220106111738","20220106111739",4,-1,-1,"SYSTEM","","psd240",327312673,"MS932","Server ジョブ(Server:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット/ユニット別フロアマスタテンポラリ作成:@52X6013)が異常終了しました(status: a, code: 100, host: PSC642, JOBID: 281767)","Error","jp1admin","/HITACHI/JP1/AJS2","JOB","AJSROOT1:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット/ユニット別フロアマスタテンポラリ作成","JOBNET","Server:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット","Server:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系/ユニット別フロアマスタテンポラリネット/ユニット別フロアマスタテンポラリ作成","END","20220106111731","20220106111738","100",25,"A0","AJSROOT1:/情報提供基盤/EXA-X6系/汎用集計・フロア取込系","A1","ユニット別フロアマスタテンポラリネット","A2","ユニット別フロアマスタテンポラリ作成","A3","@52X6013","ACTION_VERSION","0600","B0","n","B1","1","B2","jp1admin","B3","psd240","B4","a","C0","PSC642","C1","","C2","281767","C3","PSC642","C4","0","C5","0","C6","r","E0","1641435451","E1","1641435458","E2","0","E3","0","H2","578828","H3","pj","H4","q","PLATFORM","NT",

Extraction used: (?:[^,]+,){14}(?<alert_description>[^,]+),

Please help extract the highlighted field. Thank you.
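The extraction likely breaks because the target field contains commas inside its double quotes, and `[^,]+` stops at the first of them. A sketch of a quote-aware alternative (the count of skipped fields is carried over from the original regex): each field is consumed as either a whole quoted string or an unquoted run, and the 15th field is then captured between its quotes:

```
(?:(?:"[^"]*"|[^,]*),){14}"(?<alert_description>[^"]*)"
```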
Hi All, I have a query that returns the list of filesystems and their respective disk usage details as below:

File_System  Total in GB  Used in GB  Available in GB  Disk_Usage in %
/var         10           9.2         0.8              92
/opt         10           8.1         1.9              81
/logs        10           8.7         1.3              87
/apps        10           8.4         1.6              84
/pcvs        10           9.4         0.6              94

I need to create a multiselect input with the disk usage values so the table above is filtered to a range of values. For example, if I select 80 in the multiselect, the table should show rows with disk usage in the range 76-80; if I select both 80 and 90, it should show rows in the ranges 76-80 and 86-90, and so on. I created the multiselect with the token "DU" and the search query for the table as:

.... | where ((Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5)) OR (Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5)))
| table File_System,Total,Used,Available,Disk_Usage
| rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

With the above query I get results when I run a search manually with two different values (e.g. 100 and 65) substituted for $DU$ in (Disk_Usage<=$DU$ AND Disk_Usage>($DU$-5)). But I am not able to get the table in the dashboard when multiple values are selected. Please help me with the delimiter to be added, or help create a query, so that selecting multiple options in the multiselect gives the table for a range of disk usage values.
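One hedged sketch: encode the whole range clause in each choice's value and let the multiselect's delimiter join the selections with OR, so the token expands directly into the where clause (token and field names taken from the post; only two choices shown):

```xml
<input type="multiselect" token="DU">
  <label>Disk Usage</label>
  <choice value="(Disk_Usage&lt;=80 AND Disk_Usage&gt;75)">80</choice>
  <choice value="(Disk_Usage&lt;=90 AND Disk_Usage&gt;85)">90</choice>
  <delimiter> OR </delimiter>
</input>
```

The table search then reduces to `.... | where $DU$ | table File_System,Total,Used,Available,Disk_Usage`, and selecting 80 and 90 expands the token to `(Disk_Usage<=80 AND Disk_Usage>75) OR (Disk_Usage<=90 AND Disk_Usage>85)`.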
Hi, I have difficulty understanding exactly what the fields DEST_KEY and FORMAT do for my host in stanza 1, and what FORMAT does in stanza 2. I have read the documentation but... Thanks in advance.

[rfc5424_host]
DEST_KEY = MetaData:Host
REGEX = <\d+>\d{1}\s{1}\S+\s{1}(\S+)
FORMAT = host::$1

[host_as_src]
SOURCE_KEY = host
REGEX = (.+)
FORMAT = src::$1
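An annotated sketch of the same two stanzas; the comments reflect one reading of the transforms.conf semantics, not a definitive answer:

```ini
[rfc5424_host]
# DEST_KEY names the piece of event metadata this transform overwrites;
# MetaData:Host is the event's host field.
DEST_KEY = MetaData:Host
REGEX = <\d+>\d{1}\s{1}\S+\s{1}(\S+)
# FORMAT is the value written into that key. host::$1 means "set host to
# whatever the first capture group matched"; the host:: prefix is required
# syntax when writing to a metadata key.
FORMAT = host::$1

[host_as_src]
# No DEST_KEY here: paired with WRITE_META = true, this style of transform
# creates an indexed field instead of rewriting metadata.
SOURCE_KEY = host
REGEX = (.+)
# src::$1 names the new field (src) and its value (the captured host).
FORMAT = src::$1
```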