All Topics

I have a Holiday.csv file that contains specific holiday dates, for example: 2024-04-01, 2026-12-29, 2028-06-26. I am working on muting alerts for the day after each of those dates. So if the holiday was on Monday, the alert shouldn't fire on Tuesday; if the holiday was on Tuesday, it shouldn't fire on Wednesday, and so on. The weird one is when the holiday is on a Friday: then we actually don't want the alert to fire on the following Monday. This is what I have for my query; I'm just not sure how I would add in the Friday scenario if I use strftime(_time+86400,"%Y-%m-%d") ```to add one day```:

index=<search>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as Date output Holiday
| eval should_alert=if((holidays.csv!="" AND isnull(Holiday)), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"

If something like this is possible in Splunk, I think it would work: if the holiday is a Friday, add 3 days, otherwise add 1 day.
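A minimal sketch of one way the "add 1 day, or 3 days for a Friday holiday" rule could be expressed (field and file names are taken from the post; checking whether the previous day, or the previous Friday when today is Monday, was a holiday is just an equivalent restatement of adding the offset to the holiday date):

index=<search>
| eval Date=strftime(_time,"%Y-%m-%d")
``` look back 3 days on a Monday (to reach Friday), otherwise 1 day ```
| eval offset=if(strftime(_time,"%A")="Monday", 3*86400, 86400)
| eval CheckDate=strftime(_time-offset,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as CheckDate output Holiday
| eval should_alert=if(isnull(Holiday), "Yes", "No")
| table Date CheckDate should_alert
| where should_alert="Yes"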
Running a lookup where I have verified the fields exist and match, but it's not returning the output field. I verified this by running the lookup by itself and it still doesn't match. I have checked permissions and ran the search from the app the lookup belongs to. I can view the lookup with "| inputlookup <name>".

Example, running the lookup on itself:

| inputlookup myfile
| table a, b
| lookup myfile a OUTPUT b AS c
| table a, b, c

c always shows as empty for this one lookup.
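In case it helps frame the question, a hedged diagnostic along these lines can surface the two most common culprits for a silent lookup miss, hidden whitespace and case differences (field names a, b, c are the ones from the example above; the trim/lower handling is only illustrative):

| inputlookup myfile
| eval a_len=len(a), a_trimmed=trim(a), a_lower=lower(a)
| lookup myfile a AS a_trimmed OUTPUT b AS c_from_trimmed
| lookup myfile a AS a_lower OUTPUT b AS c_from_lower
| table a a_len c_from_trimmed c_from_lower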
The Splunk Add-on for Windows 8.8.0 renames the sourcetypes to lowercase xmlwineventlog and wineventlog. Is this normal? I am familiar with using the XmlWinEventLog and WinEventLog formatted sourcetypes.
Howdy, I'm building out some alerting in Splunk ES and created a new correlation search. That is all working, but I'm unable to pass my eval'd field as a value into the email alert. What I have:

| eval alert_message=range.":".sourcetype." log source has not checked in ".'Communicated Minutes Ago'." minutes. On index=".index.". Latest Event:".'Latest Event'
| table alert_message

Just running the search works; the table is there and looks correct. I've tried variations of $alert_message$ with and without quotes, but alert_message never gets passed to the email alert. I haven't tried to generate a notable, but I'm guessing I'll have the same issue. I feel like I'm missing something easy here...
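For reference, a hedged sketch of how the token is usually referenced in a standard email alert action: fields from the search results are generally exposed as $result.<fieldname>$ rather than $<fieldname>$ (whether the ES correlation-search email action behaves exactly like a saved-search alert here is an assumption). In savedsearches.conf terms it might look roughly like this:

# assumes the search produces an alert_message field, as in the post
action.email = 1
action.email.subject = Log source check-in alert
action.email.message.alert = $result.alert_message$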
Hi All,   I am trying to parse raw data with json elements to proper JSON format in Splunk. I have tried multiple props.conf but failed to parse it as per expected output. Below I have attached the data coming as a single event on Splunk and expected data what we want to see. Can someone please correct my props.conf ?   Events on Splunk with default sourcetype   {"messageType":"DATA_MESSAGE","owner":"381491847064","logGroup":"tableau-cluster","logStream":"SentinelOne Agent Logs","subscriptionFilters":["splunk"],"logEvents":[{"id":"38791169637844522680841662226148491272212438883591651328","timestamp":1739456206172,"message":"[2025-02-13 15:16:41.413885] [110775] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: No such file or directory\n[2025-02-13 15:16:42.213970] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: No such file or directory\n[2025-02-13 15:16:42.214870] [110830] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: No such file or directory\n[2025-02-13 15:16:42.218488] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: No such file or directory\n[2025-02-13 15:16:43.815051] [110827] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: No such file or directory\n[2025-02-13 15:16:44.617525] [110773] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: No such file or directory\n[2025-02-13 15:16:45.413954] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: No such file or directory"},{"id":"38791169749325947928296247310685546917181598051987750913","timestamp":1739456211171,"message":"[2025-02-13 15:16:47.014642] [110770] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24395.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24395.tmp: No such file or directory\n[2025-02-13 15:16:47.813934] [110823] 
[error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24396.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24396.tmp: No such file or directory\n[2025-02-13 15:16:47.814459] [110828] [warning] DV process create: Couldn't fetch grandparent process of process 26395 from the data model\n[2025-02-13 15:16:47.815399] [110828] [warning] DV process create: Couldn't fetch grandparent process of process 26396 from the data model\n[2025-02-13 15:16:47.816855] [110827] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24397.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24397.tmp: No such file or directory\n[2025-02-13 15:16:48.616944] [110825] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream   Expected Output with fiedls extraction   { "messageType": "DATA_MESSAGE", "owner": "381491847064", "logGroup": "tableau-cluster", "logStream": "SentinelOne Agent Logs", "subscriptionFilters": ["splunk"], "logEvents": [ { "id": "38791169637844522680841662226148491272212438883591651328", "timestamp": 1739456206172, "message": "[2025-02-13 15:16:41.413885] [110775] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24388.tmp: No such file or directory\n[2025-02-13 15:16:42.213970] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/hyper_transient.112335.24390.tmp: No such file or directory\n[2025-02-13 15:16:42.214870] [110830] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24389.tmp: No such file or directory\n[2025-02-13 15:16:42.218488] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24391.tmp: No such file or directory\n[2025-02-13 15:16:43.815051] [110827] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24392.tmp: No such file or directory\n[2025-02-13 15:16:44.617525] [110773] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24394.tmp: No such file or directory\n[2025-02-13 
15:16:45.413954] [110823] [error] full_file_overwrite_flag: failed to stat /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: stat failed on path: /app/tableau/tableau_data/data/tabsvc/temp/hyper_0.20233.24.0718.1630/copyexternalstream.112335.24393.tmp: No such file or directory" } ] }   Props.conf   [json_splunk_logs] # Define the source type for the logs sourcetype = json_splunk_logs # Time configuration - Parse the timestamp in your message TIME_FORMAT = %Y-%m-%d %H:%M:%S.%6N TIME_PREFIX = \["message"\] \[ # Specify how to break events in the multiline message SHOULD_LINEMERGE = false LINE_BREAKER = ([\r\n]+) # Event timestamp extraction DATETIME_CONFIG = NONE # JSON parsing - This tells Splunk to extract fields from JSON automatically KV_MODE = json # The timestamp is embedded in the message, so the following configuration is necessary for time extraction. EXTRACT_TIMESTAMP = \["messageType":"DATA_MESSAGE","owner":"\d+","logGroup":"\w+","logStream":"\w+","subscriptionFilters":\[\\"splunk\\"\],\s"timestamp":(\d+),".*?
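For what it's worth, a hedged props.conf sketch for a single-line JSON payload like this one (the sourcetype name json_splunk_logs comes from the post; as far as I know EXTRACT_TIMESTAMP is not a recognized props.conf setting, and the timestamp handling below assumes the epoch-milliseconds value after the first "timestamp": key should drive _time):

[json_splunk_logs]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# keep very long events from being cut off
TRUNCATE = 0
# search-time JSON field extraction
KV_MODE = json
# 13-digit epoch milliseconds following the first "timestamp": key
TIME_PREFIX = "timestamp":\s*
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13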
Can someone please tell me where I can obtain a Trial Enterprise License from?
When I upgraded ES to 8.0.2, the "Short ID" button went missing from the Additional Fields, and I also can't search by the case ID instead of by time.
Hello Splunk colleagues! I'm trying to create a new correlation search that generates a notable event and uses a field I generate for the title. The title field in the notable indicates I can use variable substitution, and I've verified that the field is being created for every event the correlation search generates. The field is called my_rule_title. In the notable event configuration I am putting in $my_rule_title$, but when the notable is generated, the rule title in Incident Review literally says "$my_rule_title$" and not the contents of the field my_rule_title. What am I doing wrong that keeps the rule title in Incident Review from displaying the value of my_rule_title? The other variable substitutions I'm doing in the correlation search, for $description$ and $urgency$, are working as expected; just not the title.
Hello, I have the SPL below, where I am trying to fetch the user accounts that have not logged in for 30 days or more, but I am not seeing any results. Can someone please check this query and help me figure out whether everything is correct?

index=windows sourcetype=* EventCode=4624
| stats latest(_time) as lastLogon by Account_Name
| eval days_since_last_logon = round((now() - lastLogon) / 86400, 0)
| where days_since_last_logon > 30
| table Account_Name, days_since_last_logon

Thanks in advance
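A hedged observation, with a sketch: stats latest(_time) only sees accounts that produced a 4624 event inside the searched time range, so if the time picker covers less than 30 days nothing can satisfy days_since_last_logon > 30. Widening the window (the -90d value below is just an illustration) is one way to test that; accounts with no 4624 events at all in the window still won't appear, since there is nothing to report on.

index=windows sourcetype=* EventCode=4624 earliest=-90d latest=now
| stats latest(_time) as lastLogon by Account_Name
| eval days_since_last_logon = round((now() - lastLogon) / 86400, 0)
| where days_since_last_logon > 30
| table Account_Name, days_since_last_logon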
My setup currently has the LM, indexer, SH, and DS all on the same host, running Splunk Enterprise 9.4. I get about 10 messages a second logged in splunkd.log with the following errors:

ERROR BTree [1001653 IndexerTPoolWorker-3] - 0th child has invalid offset: indexsize=67942584 recordsize=166182200, (Internal)
ERROR BTreeCP [1001653 IndexerTPoolWorker-3] - addUpdate CheckValidException caught: BTree::Exception: Validation failed in checkpoint

I have noticed that btree_index.dat and btree_records.dat in /opt/splunk/data/fishbucket/splunk_private_db are re-created every few seconds. From what I can tell, after they get to a certain point, those files are copied into the corrupt directory and deleted, and then it starts all over. I have tried shutting down Splunk and copying snapshot files over, but when I restart Splunk they are overwritten and the whole loop of files getting created and then copied to corrupt starts again.

I tried a repair on the data files with the following command:

splunk cmd btprobe -d /opt/splunk/data/fishbucket/splunk_private_db -r

which returned the following output:

no root in /opt/splunk/data/fishbucket/splunk_private_db/btree_index.dat with non-empty recordFile /opt/splunk/data/fishbucket/splunk_private_db/btree_records.dat
recovered key: 0xd3e9c1eb89bdbf3e | sptr=1207
Exception thrown: BTree::Exception: called debug on btree that isn't open!

It is entirely possible there is some corruption somewhere. We did have a filesystem issue a while back; I had to run fsck and there were a few files that I removed. As far as the data goes, I can't seem to find where the problem might be. In Splunk search I appear to have incomplete data in the _internal index, I can't view licensing, and the Data Quality views are empty with no data.

Do I have some corrupt data somewhere that is causing problems with my btree index data? How would I go about finding the cause of this problem?
Hi everyone! After upgrading to version 3.8.1, I got a bunch of errors. In Security Content I get the following:

app/Splunk_Security_Essentials/security_content 404 (Not Found)
web-client-content-script.js:2 Uncaught (in promise) Error: Access to storage is not allowed from this context.

On the Data Inventory page I get the following:

Error! Received the following error:
Error occurred while grabbing data_inventory_products
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_Security_Essentials/bin/generateShowcaseInfo.py", line 473, in handle
    if row['stage'] in ['all-done', 'step-review', 'step-eventsize', 'step-volume', 'manualnodata']:
KeyError: 'stage'

There are also a couple of apps, splunk_essentials_8_2 and splunk_essentials_9_0, that are both enabled. Does anyone know how to fix this? Thanks!
Qualys logs are not flowing to Splunk. This is the error:

TA-QualysCloudPlatform: 2025-02-13 00:00:49 PID=3604316 [MainThread] ERROR: No credentials found. Cannot continue.

How can I debug this? Note: We recently updated the expired credentials of the Splunk user account in Qualys. The new credentials work fine in the GUI (https://qualysguard.qg2.apps.qualys.com/fo/login.php); however, the Splunk add-on is still not accepting them, and logs are not flowing.
Trying to build a search that will leverage ldapsearch to pull a current list of users that are members of a specific list of groups. For example, some groups may be:

CN=Schema Admins,OU=groups,DC=domain,DC=xxx
CN=Enterprise Admins,OU=group1,OU=groups,DC=domain,DC=xxx
CN=Domain Admins,OU=group1,OU=groups,DC=domain,DC=xxx

This rex, (?<=CN=)[^,]+, will grab the group name, but I'm having trouble pulling this all together. The search needs to cover any group we want to include by specific name and then table out a list of the users that are members of each group, sorted by the group name.
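A hedged sketch of one way this might be pulled together, assuming the SA-ldapsearch app's ldapsearch command with a configured default domain (the attribute names sAMAccountName and memberOf, and the group DNs and names, are illustrative):

| ldapsearch search="(&(objectClass=user)(|(memberOf=CN=Schema Admins,OU=groups,DC=domain,DC=xxx)(memberOf=CN=Enterprise Admins,OU=group1,OU=groups,DC=domain,DC=xxx)(memberOf=CN=Domain Admins,OU=group1,OU=groups,DC=domain,DC=xxx)))" attrs="sAMAccountName,memberOf"
| mvexpand memberOf
| rex field=memberOf "CN=(?<group_name>[^,]+)"
| search group_name IN ("Schema Admins", "Enterprise Admins", "Domain Admins")
| stats values(sAMAccountName) as members by group_name
| sort group_name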
I recently had cause to ingest Oracle Unified Directory logs in ODL format. I'm performing pretty simple file-based ingestion and didn't want to go down the path of DB Connect, etc., so it made sense to write my own parsing logic. I didn't have a lot of luck finding things precisely relevant to my logs and I could have benefited from the concrete example I'll lay out below, so here it is; hopefully it will help others. This specific example is OUD logs, but it's relevant to any arbitrary _KEY_n, _VAL_n extraction in Splunk.

Log format

The ODL logs are mostly well structured, but the format is a little odd to work with.
- Most fields are bracketed: [FIELD_1]
- The log starts with "header" fields, which are fixed and contain single values
- The next section of the log has an arbitrary number of bracketed key-value pairs: [KEY: VALUE]
- The final piece is a non-structured string that may also contain brackets and colons

Below is a small sample of a few different types of logs:

[2016-12-30T11:08:46.216-05:00] [ESSBASE0] [NOTIFICATION:16] [TCP-59] [TCP] [ecid: 1482887126970,0] [tid: 140198389143872] Connected from [::ffff:999.999.99.999]
[2016-12-30T11:08:27.60-05:00] [ESSBASE0] [NOTIFICATION:16] [AGENT-1001] [AGENT] [ecid: 1482887126970,0] [tid: 140198073563456] Received client request: Clear Application/Database (from user [sampleuser@Native Directory])
[2016-12-30T11:08:24.302-05:00] [PLN3] [NOTIFICATION:16] [REQ-91] [REQ] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Received Command [SetAlias] from user [sampleuser@Native Directory]
[2016-12-30T11:08:26.932-05:00] [PLN3] [NOTIFICATION:16] [SSE-82] [SSE] [ecid: 148308120489,0] [tid: 140641102035264] [DBNAME: SAMPLE] Spreadsheet Extractor Big Block Allocs -- Dyn.Calc.Cache : [202] non-Dyn.Calc.Cache : [0]
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641549] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QL,0:1] [category: RES] [conn: 1285] [op: 0] [msgID: 1] [result: 0] [authDN: uid=redacted,ou=redacted,o=redacted,c=redacted] [etime: 0] BIND
[2025-02-07T15:18:35.014-05:00] [OUD] [TRACE] [OUD-24641548] [PROTOCOL] [host: testds01.redacted.fake] [nwaddr: redacted] [tid: 200] [userId: ldap] [ecid: 0000PJY46q64Usw5sFt1iX1bdZJL0003QK,0:1] [category: REQ] [conn: 1285] [op: 0] [msgID: 1] [bindType: SIMPLE] [dn: uid=redacted,ou=redacted,o=redacted,c=redacted] BIND

Configuration files

We opted to use the sourcetype "odl:oud". This allows future extension into other ODL-formatted logs.

props.conf

Note the 3-part REPORT processing. This utilizes regexes in transforms.conf to process the headers, the key-value pairs, and the trailing message. This could then be extended to utilize portions of that extraction for other log types that fall under the ODL format.

[odl:oud]
REPORT-oudparse = extractOUDheader, extractODLkv, extractOUDmessage

transforms.conf

This was the piece I could have used some concrete examples of. These three report extractions allow a pretty flexible ingestion of all parts of the logs.
The key here was separating the _KEY_1 _VAL_1 extraction into its own process with the REPEAT_MATCH flag set.

# extract fixed leading values from Oracle Unified Directory log message
[extractOUDheader]
REGEX = ^\[(?<timestamp>\d{4}[^\]]+)\] \[(?<organization_id>[^\]]+)\] \[(?<message_type>[^\]]+)\] \[(?<message_id>[^\]]+)\] \[(?<component>[^\]]+?)\]

# extract N number of key-value pairs from Oracle Diagnostic Logging log body
[extractODLkv]
REGEX = \[(?<_KEY_1>[^:]+?): (?<_VAL_1>[^\]]+?)\]
REPEAT_MATCH = true

# extract trailing, arbitrary message text from Oracle Unified Directory log
[extractOUDmessage]
REGEX = \[[^:]+: [^\]]+\] (?<message>[^\[].*)$

The final regex there looks for a preceding key-value pair, NOT followed by a new square bracket, with any arbitrary characters thereafter to end the line. I initially tried to make one regex to perform all of this, which doesn't really work without being very prescriptive in the structure. With an unknown number of key-value pairs, this is not ideal and this solution seemed much more Splunk-esque. I hope this helps someone else!
Hi Experts, the file ACF2DS_Data.csv contains columns including TIMESTAMP, DS_NAME, and JOBNAME. I need to match the DS_NAME column from this file against the LKUP_DSN column in DSN_LKUP.csv to obtain the corresponding events from ACF2DS_Data.csv. The query provided below is not working as expected. Could you please assist me in resolving the issue with the query?

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    [search source="*DSN_LKUP.csv" index="idxmainframe" earliest=0 latest=now
    | eval LKUP_DSN = "%".LKUP_DSN."%"
    | where like(DS_NAME,LKUP_DSN)
    | table DS_NAME]
| table TIMESTAMP, DS_NAME, JOBNAME

Thanks,
Ravikumar
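A hedged thought on why the subsearch returns nothing: the subsearch runs on its own, so DS_NAME from ACF2DS_Data.csv isn't visible inside it and where like(DS_NAME,LKUP_DSN) has nothing to compare. One common pattern is to have the subsearch emit wildcarded DS_NAME terms for the outer search instead (this sketch assumes DS_NAME is an extracted field on the outer events and the lookup file stays within subsearch limits):

source="*ACF2DS_Data.csv" index="idxmainframe" earliest=0 latest=now
    [ search source="*DSN_LKUP.csv" index="idxmainframe" earliest=0 latest=now
      | eval DS_NAME="*".LKUP_DSN."*"
      | fields DS_NAME
      | format ]
| table TIMESTAMP, DS_NAME, JOBNAME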
Hi peers, how is the compatibility between the OTel Collector and AppDynamics? Is it efficient and recommendable? If we use the OTel Collector for exporting data to AppD, is it still required to use AppD agents as well? How will licensing work if we use OTel for exporting data to AppD? Is the OTel Collector compatible with both the on-premises and SaaS environments of AppD? Thanks
I'm having some issues populating the Traffic Center dashboard in Splunk ES. It's showing "Cannot read properties of undefined (reading 'map')". Does anyone have any solutions?
I would like to know if it is possible to use the same machine for both a Deployment Server and a Heavy Forwarder. If so, would I need two different licenses for this machine, or can I simply use the Forwarder license while utilizing the functions of both the Deployment Server and the Heavy Forwarder? Thank you so much.
Hello, I have this search:

| rest splunk_server=MSE-SVSPLUNKI01 /services/server/status/resource-usage/hostwide
| eval cpu_usage = cpu_system_pct + cpu_user_pct
| where cpu_usage > 10

I want this search to give me a graph visualization of total cpu_usage every 4 hours.
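A hedged sketch of one way to get that trend: | rest only returns a point-in-time snapshot, so a chart over time is usually built from the _introspection index instead (the sourcetype, component, and data.* field names below assume the standard Splunk resource-usage introspection data; adjust the host filter as needed):

index=_introspection host=MSE-SVSPLUNKI01 sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_usage = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=4h avg(cpu_usage) as cpu_usage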
What is the definition of large? Is it measured in total bytes? Number of records? And in either case how much?