All Posts

As a rule of thumb, the base search should be a transforming search (i.e. containing a stats or timechart command). You can get away with a non-transforming search, but then you should explicitly list the fields you want to retain from the base search for later use by the post-process searches. And you definitely don't want too much data returned from the base search (the search head has to keep this result set around for post-processing, after all). So it depends on your whole picture, because it's not always about the smallest common denominator. For example, if you have one search

index=a | stats count by fieldb

and another one

index=a | stats count by fieldc

the best base search would not be

index=a | fields fieldb fieldc

but rather

index=a | stats count by fieldb fieldc

and your post-process searches would just do | stats sum(count) by fieldb and | stats sum(count) by fieldc respectively.
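To make the wiring concrete, here is a minimal Simple XML sketch of that pattern; the dashboard label, search id and field names are made up for illustration, not taken from the thread. The base search carries an id, and each panel references it with the base attribute and only aggregates further:

<dashboard>
  <label>Base search example</label>
  <!-- base search: transforming, returns only the fields the panels need -->
  <search id="base_counts">
    <query>index=a | stats count by fieldb fieldc</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <!-- post-process search: starts with a pipe and re-aggregates the base results -->
        <search base="base_counts">
          <query>| stats sum(count) AS count by fieldb</query>
        </search>
      </table>
    </panel>
    <panel>
      <table>
        <search base="base_counts">
          <query>| stats sum(count) AS count by fieldc</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>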
OK. Where are you exporting this from? Splunk should enclose the values in double quotes (and use double double quotes for the double quotes within a field value) when exporting search results. So where are you exporting from/to?
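For illustration, this is what a correctly exported CSV row looks like under that convention (the field names and values here are made up): the value containing commas and quotes is wrapped in double quotes, and each embedded double quote is doubled.

host,message,status
web01,"disk at 91%, ""critical"" threshold reached",warning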
Hi all, Today I've updated Splunk from version 9.2.2 to 9.3.0. All seems to be good except Alert Manager Enterprise 3.0.8, which is not working anymore. I'm kinda new to Splunk, so I don't know where to start. The error we get is:

Unable to initialize modular input "tag_keeping" defined in the app "alert_manager_enterprise": Introspecting scheme=tag_keeping: script running failed (PID 4085525 exited with code 1)

Please help me. Kind regards, Glenn
Because... it's Splunk math (I suppose it has something to do with float handling underneath). See this run-anywhere example:

| makeresults count=10
| streamstats count
| map search="| makeresults count=$count$ | streamstats count as count2 | eval count=$count$"
| eval count=count/10, count2=count2/10
| eval diff=count-count2
| table count count2 diff
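A minimal run-anywhere sketch of a workaround, assuming two decimal places are enough for the comparison: rounding both sides to the same precision keeps the binary-float residue out of the test.

| makeresults
| eval amount=10.6
| eval fraction=amount-floor(amount)
``` round before comparing so 0.5999999999999996 and 0.6 land on the same value ```
| eval compare=if(round(fraction,2)=round(0.6,2), "T", "F")
| table amount fraction compare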
Hello, I have an app and this is its default navigation XML:

<nav search_view="search" color="#65A637">
  <view name="search" default="true"/>
  <view name="securehr"/>
  <view name="secure_group_members"/>
  <view name="changes_on_defined_secure_groups"/>
  <view name="group_membership"/>
  <view name="group_membership_v2"/>
  <collection label="Reports">
    <view name="SecureFolder-Report for APT">Report for APT</view>
    <view name="SecureFolder-Report for Betriebsrat" default="true">Report for Betriebsrat</view>
    <view name="SecureFolder-Report for CMA" default="true">Report for CMA</view>
    <view name="SecureFolder-Report for HR" default="true">Report for HR</view>
    <view name="SecureFolder-Report for IT" default="true">Report for IT</view>
    <view name="SecureFolder-Report for QUE" default="true">Report for QUE</view>
    <view name="SecureFolder-Report for Vorstand" default="true">Report for Vorstand</view>
  </collection>
</nav>

From this XML it should show 7 sections on the navigation panel, but it does not show the Reports section. Can anyone help?
Hi, this thing is driving me crazy. I am running Splunk 9.2.1 and I have the following table:

amount   compare   frac_type   fraction   integer
0.41     F         Number      0.41       0
4.18     F         Number      0.18       4
0.26     F         Number      0.26       0
0.34     F         Number      0.34       0
10.60    F         Number      0.60       10
0.11     F         Number      0.11       0
2.00     F         Number      0.00       2
3.49     F         Number      0.49       3
10.58    F         Number      0.58       10
2.00     F         Number      0.00       2
1.02     F         Number      0.02       1
15.43    F         Number      0.43       15
1.17     F         Number      0.17       1

And these are the evals I used to calculate the fields:

| eval integer = floor(amount)
| eval fraction = amount - floor(amount)
| eval frac_type = typeof(fraction)
| eval compare = if(fraction = 0.6, "T", "F")

Now, I really can't understand why the "compare" field is always false. I was expecting it to output TRUE on row 5 with amount = 10.60, which means fraction = 0.60, but it does not. What am I doing wrong here? Why does "compare" evaluate to FALSE on row 5? I tried changing 0.6 to 0.60 (you never know), but no luck.

If you want, you can try this run-anywhere search, which gives me the same result:

| makeresults
| eval amount = 10.6
| eval integer = floor(amount)
| eval fraction = amount - floor(amount)
| eval frac_type = typeof(fraction)
| eval compare = if(fraction = 0.6, "T", "F")

Can you help me? Thank you in advance, Tommaso
OK. Let me quote from the OpenSSL vulnerability description. "Impact summary: A buffer overread can have a range of potential consequences such as unexpected application behaviour or a crash. In particular this issue could result in up to 255 bytes of arbitrary private data from memory being sent to the peer leading to a loss of confidentiality. However, only applications that directly call the SSL_select_next_proto function with a 0 length list of supported client protocols are affected by this issue. This would normally never be a valid scenario and is typically not under attacker control but may occur by accident in the case of a configuration or programming error in the calling application." Read the last sentence. Over and over again. If unsure, verify whether you can actually exploit this potential vulnerability. Otherwise, stop worrying about it.
Splunk 9.3.0 has the fix.
Hi, based on a multiselect reading from

index="pm-azlm_internal_prod_events" sourcetype="azlm"

I define a token with the name opc_t. This token can be used without any problems to filter data read from the same index further down in the dashboard (top three lines in the code below).

<query>index="pm-azlm_internal_prod_events" sourcetype="azlm" $opc_t$ $framenum$
| strcat opc "_" frame_num UNIQUE_ID
| dedup _time UNIQUE_ID
| append [ search index="pm-azlm_internal_dev_events" sourcetype="azlm-dev" ocp=$opc_t|s$
    | strcat ocp "-j_" fr as UNIQUE_ID
    | dedup UNIQUE_ID]
| timechart span=12h aligntime=@d limit=0 count by UNIQUE_ID
| sort by _time DESC
</query>

BUT, and here's my problem: using the same token on a different index (used in the append above) returns no results at all. One (nasty) detail: the field names in the two indexes are slightly different. In index="pm-azlm_internal_prod_events" the field I need to filter on is called opc. In the second index, pm-azlm_internal_dev_events, the field name is ocp. Dear experts: what do I need to change in the second query to be able to use the same token for filtering?
I see this behaviour too, also for another process coming from the ITSI app: /opt/splunk/etc/apps/SA-ITOA/bin/command_health_monitor.py. Besides killing the processes or restarting Splunk as a workaround, do you know whether there are efforts to finally resolve this bug? Thanks, Jan
We are also flagged for this patch vulnerability by our Tenable scanning results on the compliance portal. We were under the assumption that the Splunk Universal Forwarder 9.2.2 release would have this fix incorporated, but apparently that is not the case. Any idea when we could expect a fix for this, as the due date for this exposure has already passed (July 28th, 2024)? Thanks, Vishwa
How would you do this? I've seen solutions that replace the commas with null, but I'm not sure how to encapsulate field values containing commas in double quotes, especially field values that have multiple commas inside.
I nabbed some searches from our license server/monitoring console and placed them in the search head cluster so that they would be available to some users who should not have access to the monitoring console. The resulting overview dashboard would benefit (heavily) from a "base search" to feed the different panels. However, some of the searches use subsearches and I cannot figure out if, and then how, I can combine the two. There are a couple of these searches where you pull license usage data and the available license for different pools, or the total license available (hence using stacksz when checking "all" pools ("*")).

index=_internal source=*license_usage.log* type="RolloverSummary" pool=$pool$
| bin _time span=1d
| stats latest(b) AS b by slave, pool, _time
| timechart span=1d sum(b) AS "volume" by pool fixedrange=false
| join type=outer _time
    [ search index=_internal source=*license_usage.log* type="RolloverSummary" pool=$pool$
    | bin _time span=1d
    | dedup _time stack
    | eval licenzz=if("$pool$"=="*", stacksz, poolsz)
    | stats latest(licenzz) AS "Available license" by _time ]
| fields - Temp
| foreach * [ eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3) ]

Different panels use different stats and evals, different AS naming and more. There is, however, one consistent part, the initial search:

index=_internal source=*license_usage.log* type="RolloverSummary" pool=$pool$

I figured it would be a good idea to use a base search with this, though I cannot figure out how. Using a larger base search including the join and the subsearch "sort of works", but getting all the different stats, evals and AS clauses to produce the expected output is a nightmare. The initial, smaller base search above is the smallest common denominator, but then I can't figure out how to reference this base inside the subsearch for the join. All suggestions are welcome. All the best
Have you tried using double quotes around the field values with commas in them? e.g. field 1,"field2, with commas", field3
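If the quoting has to happen inside the search before export, a minimal sketch (the field name message is hypothetical) could double any embedded double quotes and then wrap the value in quotes whenever it contains a comma, which is the standard CSV convention:

| makeresults
| eval message="disk at 91%, \"critical\" threshold reached"
``` double embedded quotes, then wrap in quotes if the value contains a comma ```
| eval message=if(match(message, ","), "\"" . replace(message, "\"", "\"\"") . "\"", message)
| table message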
I just have the logs ingested in Splunk... but not much information to do a real investigation... Kerio is what it is...
Hi @Nawab , good for you, see you next time! Ciao and happy splunking, Giuseppe P.S.: Karma Points are appreciated by all the contributors
The GWS is running for the whole company. Is it possible to ingest only a subset of users' logs into Splunk, using the add-on for GWS or some filter function somewhere? If I only want to monitor members of a specific department of my organization, how can I filter on GWS? I think the logs could be filtered while being sent to Splunk by GCP, but what about using the GWS add-on directly? Maybe this question is more about the Google service... Anyone familiar, please kindly help, thank you!
i am testing this data by uploading in splunk cloud but i am not getting events in proper format when i am selectin sourcetype as json in settings given screen shot [ { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "General_parameters", "check_status": "OK", "database_major_version": "19", "database_minor_version": "0", "database_name": "C2N48617", "database_version": "19.0.0.0.0", "extract_date": "30/07/2024 08:09:06", "host_name": "flosclnrhv03.pharma.aventis.com", "instance_name": "C2N48617", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "LIVE2459_VAL", "check_key_4": "AQ$_Q_TASKREPORTWORKTASK_TAB_E", "check_key_5": "", "check_key_6": "", "check_name": "queue_mem_check", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "queue_name": "AQ$_Q_TASKREPORTWORKTASK_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "LIVE2459_VAL", "check_key_4": "AQ$_Q_PIWORKTASK_TAB_E", "check_key_5": "", "check_key_6": "", "check_name": "queue_mem_check", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "queue_name": "AQ$_Q_PIWORKTASK_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "LIVE2459_VAL", "check_key_4": "AQ$_Q_LABELWORKTASK_TAB_E", "check_key_5": "", "check_key_6": "", "check_name": "queue_mem_check", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "queue_name": "AQ$_Q_LABELWORKTASK_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "LIVE2459_VAL", "check_key_4": "AQ$_Q_PIPROCESS_TAB_E", "check_key_5": "", "check_key_6": "", "check_name": "queue_mem_check", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "queue_name": "AQ$_Q_PIPROCESS_TAB_E", "queue_owner": "LIVE2459_VAL", "queue_sharable_mem": "4072", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "", "check_key_3": "SYS", "check_key_4": "ALERT_QUE", "check_key_5": "", "check_key_6": "", "check_name": "queue_mem_check", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "queue_name": "ALERT_QUE", "queue_owner": "SYS", "queue_sharable_mem": "0", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "", "check_key_3": "SYS", "check_key_4": "AQ$_ALERT_QT_E", "check_key_5": "", "check_key_6": "", "check_name": "queue_mem_check", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "queue_name": "AQ$_ALERT_QT_E", "queue_owner": "SYS", "queue_sharable_mem": "0", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "fra_check", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "flash_in_gb": "40", "flash_reclaimable_gb": "0", "flash_used_in_gb": ".47", "percent_of_space_used": "1.17", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "processes", "check_status": 
"OK", "extract_date": "30/07/2024 08:09:06", "process_current_value": "299", "process_limit": "1000", "process_percent": "29.9", "script_version": "1.0" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "sessions", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "script_version": "1.0", "sessions_current_value": "299", "sessions_limit": "1536", "sessions_percent": "19.47" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "SYSTEM", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "1355", "extract_date": "30/07/2024 08:09:06", "percent_used": "2", "script_version": "1.0", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "SYSAUX", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "23635", "extract_date": "30/07/2024 08:09:06", "percent_used": "36", "script_version": "1.0", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "UNDOTBS1", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "22", "extract_date": "30/07/2024 08:09:06", "percent_used": "0", "script_version": "1.0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "USERS", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "cdb_tbs_check", "check_status": "OK", "current_use_mb": "4", "extract_date": "30/07/2024 08:09:06", "percent_used": "0", "script_version": "1.0", "tablespace_name": "USERS", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "USERS", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "1176", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN8944", "percent_used": "4", "script_version": "1.0", "tablespace_name": "USERS", "total_physical_all_mb": "32767" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1S48633", "check_key_3": "SYSTEM", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "784", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1S48633", "percent_used": "1", "script_version": "1.0", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "SYSAUX", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "1549", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN8944", "percent_used": "2", "script_version": "1.0", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1S48633", "check_key_3": "USERS", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "1149", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1S48633", "percent_used": 
"2", "script_version": "1.0", "tablespace_name": "USERS", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "UNDOTBS1", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "60", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN8944", "percent_used": "0", "script_version": "1.0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1S48633", "check_key_3": "SYSAUX", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "7803", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1S48633", "percent_used": "12", "script_version": "1.0", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "SYSTEM", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "705", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN8944", "percent_used": "1", "script_version": "1.0", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN8944", "check_key_3": "INDX", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "378", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN8944", "percent_used": "1", "script_version": "1.0", "tablespace_name": "INDX", "total_physical_all_mb": "32767" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN2467", "check_key_3": "SYSTEM", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "623", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN2467", "percent_used": "1", "script_version": "1.0", "tablespace_name": "SYSTEM", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1S48633", "check_key_3": "AUDIT_TBS", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "3", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1S48633", "percent_used": "0", "script_version": "1.0", "tablespace_name": "AUDIT_TBS", "total_physical_all_mb": "8192" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1S48633", "check_key_3": "USRINDEX", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "128", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1S48633", "percent_used": "0", "script_version": "1.0", "tablespace_name": "USRINDEX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1S48633", "check_key_3": "UNDOTBS1", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "77", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1S48633", "percent_used": "0", "script_version": "1.0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1S48633", "check_key_3": "TOOLS", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": 
"pdb_tbs_check", "check_status": "OK", "current_use_mb": "5", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1S48633", "percent_used": "0", "script_version": "1.0", "tablespace_name": "TOOLS", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN2467", "check_key_3": "UNDOTBS1", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "24", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN2467", "percent_used": "0", "script_version": "1.0", "tablespace_name": "UNDOTBS1", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "O1NN2467", "check_key_3": "SYSAUX", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "pdb_tbs_check", "check_status": "OK", "current_use_mb": "628", "extract_date": "30/07/2024 08:09:06", "pdb_name": "O1NN2467", "percent_used": "1", "script_version": "1.0", "tablespace_name": "SYSAUX", "total_physical_all_mb": "65536" }, { "check_error": "", "check_key_1": "C2N48617", "check_key_2": "", "check_key_3": "", "check_key_4": "", "check_key_5": "", "check_key_6": "", "check_name": "monitoring_package", "check_status": "OK", "extract_date": "30/07/2024 08:09:06", "script_version": "1.0" } ]
Hi @Nawab , to use ComputerName instead of host you cannot use tstats and the search is slower, so try this:

With the perimeter.csv lookup:

index=*
| stats count BY sourcetype ComputerName
| append [ | inputlookup perimeter.csv | eval count=0 | fields ComputerName sourcetype count ]
| stats sum(count) AS total BY sourcetype ComputerName
| where total=0

Without the lookup:

index=*
| stats count latest(_time) AS _time BY sourcetype ComputerName
| eval period=if(_time<now()-3600,"previous","latest")
| stats dc(period) AS period_count values(period) AS period BY sourcetype ComputerName
| where period_count=1 AND period="previous"

Ciao. Giuseppe
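For comparison, if host (rather than ComputerName) is acceptable for your perimeter, a faster tstats-based sketch along the same lines could look like this; it assumes perimeter.csv also contains a host column, which may not match your lookup:

| tstats count WHERE index=* BY host sourcetype
``` hosts present only in the lookup end up with total=0, i.e. they sent no data ```
| append [ | inputlookup perimeter.csv | eval count=0 | fields host sourcetype count ]
| stats sum(count) AS total BY sourcetype host
| where total=0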