All Posts

I can't see any sourcetype parsing issues. I only see old bugs from testing the app, but those should not be relevant for us.
Hi @gcusello, I checked the macros again and they contain the same values you listed earlier. Unless we've missed something very basic, they look in order. I've attached screenshots of the macro configs; I believe they are correct? They were all auto-populated when the app was installed. There are no details about special settings for the index, which I created via the Web UI with the suggested default index name. Thanks again.
Instead of dealing with the messiness of natural language, it might be better to use the standard duration notation, like

| fieldformat max(event.Properties.duration) = tostring('max(event.Properties.duration)', "duration")

Instead of 40 mins, 30 mins, or 1 hrs, you get 00:40:00, 00:30:00, 01:00:00, and so on.
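For readers less familiar with the duration format, its effect can be sketched outside Splunk. This hypothetical Python helper mimics what tostring(<seconds>, "duration") produces for whole-second values (assumed behavior for values under 24 hours):

```python
def to_duration(seconds):
    """Format a number of seconds as HH:MM:SS, similar to
    Splunk's tostring(<seconds>, "duration") for whole seconds."""
    seconds = int(seconds)
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}"

print(to_duration(2400))  # 40 mins -> 00:40:00
print(to_duration(1800))  # 30 mins -> 00:30:00
print(to_duration(3600))  # 1 hr   -> 01:00:00
```

Because the notation is fixed-width and zero-padded, it also sorts correctly as a string, which the "40 mins / 1 hrs" style does not.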
Hi @slider8p2023, did you customize the three macros that are present in this app to populate the lookups? They are:

`data_model_wrangler_index` - Index to which summary data will be sent. Default: data_model_wrangler

`datamodel_wrangler_data_model_list` - Comma-separated list of data models to monitor. Default: Authentication, Change, Change_Analysis, DLP, Databases, Email, Endpoint, Intrusion_Detection, Malware, Network_Resolution, Network_Sessions, Network_Traffic, Web

`data_model_wrangler_health_review_lookup` - The name of the lookup containing review information. Default: data_model_wrangler_health_review.csv

Pay attention especially to the first one, which contains the index to check.

Ciao.
Giuseppe
Hi @CarolinaHB, there are two solutions, depending on where the monitoring perimeter is defined.

If you have a lookup containing the list of apps that should be present on each host (called e.g. app_perimeter.csv and containing at least two fields, host and application), you could run something like this:

<your_search>
| stats count BY host application
| append [ | inputlookup app_perimeter.csv | eval count=0 | fields host application count ]
| stats sum(count) AS total BY host application
| eval status=if(total=0,"Missing","Present")
| table host application status

If instead you don't have this lookup and you want to compare, e.g., the results of the last 24 hours with the results of the last 30 days, you could run something like this:

<your_search> earliest=-30d latest=now
| eval period=if(_time>now()-86400,"Last day","Previous days")
| stats dc(period) AS period_count values(period) AS period BY host application
| eval status=if(period_count=1 AND period="Previous days","Missing","Present")
| table host application status

Ciao.
Giuseppe
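The idea behind the first search (flag an application as Missing when it never appears for a host) boils down to a set difference between the perimeter lookup and the observed results. A plain-Python sketch with hypothetical data standing in for the lookup and the search output:

```python
# Hypothetical perimeter lookup: apps expected on each host
expected = {
    "Host1": {"cortex", "Tenable", "Trend Micro", "Guardicore"},
    "Host2": {"cortex", "Tenable", "Guardicore"},
}
# Hypothetical observed (host, application) pairs from the search
observed = {("Host1", "cortex"), ("Host1", "Tenable"),
            ("Host1", "Trend Micro"), ("Host2", "cortex"),
            ("Host2", "Tenable")}

# Same logic as the append + stats sum(count) trick: an app that
# only exists in the lookup (count=0 everywhere) is "Missing".
for host, apps in sorted(expected.items()):
    for app in sorted(apps):
        status = "Present" if (host, app) in observed else "Missing"
        print(host, app, status)
```

In SPL the zero-count lookup rows play the role of the `expected` set, so every expected pair gets a row even when the search returned nothing for it.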
Thank you for the insights @hrawat. I believe this should be part of the Monitoring Console as well, to identify the queue behavior. Thanks, Tejas.
Good morning, I'm working on a query to see which application is missing on each host. Can you help me, please? For example:

Host   application
       Guardicore
Host1  cortex
       Tenable
       Trend Micro
Host2  cortex
       Tenable

I need it to show me what is missing; in this example, Guardicore and Tenable.

Regards
Got it to work - thank you
Hi, we've set up the Data Model Wrangler app on our on-prem search head. We're running Splunk Core 9.0.3 and ES 7.0.1. The latest SA-cim-validator and CIM apps are installed as per the installation notes, and they are working as expected, showing results from the validator app. I created an index called data_model_wrangler from the Splunk Web UI, visible under the DM Wrangler app. We've scheduled the 3 saved searches that come with the app as per the instructions. We only see results from 1 of the 3 saved searches, data_model_wrangler_dm_index_sourcetype_field, and the index created is empty. The two other saved searches, data_model_wrangler_field_quality and data_model_wrangler_mapping_quality, fail with the error "No results to summary index." All saved searches and indexes are enabled. Can anyone please suggest where we've gone wrong in setting this up? @nvonkorff Thanks in advance.
This is exactly what I am looking for. However, for some reason I am not getting any values for the field "usage_lastest_hour" - any idea why this field is not displaying results? All the others are displaying as expected with the search you provided.
It's been added to the fixed issues (SPL-248188, SPL-248140): https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues

You may also want to check out https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/683768#M9963
From 9.1.3/9.2.1 onwards, the slow indexer/receiver detection capability is fully functional (SPL-248188, SPL-248140). https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues

You can enable it on the forwarding side in outputs.conf:

maxSendQSize = <integer>
* The size of the tcpout client send buffer, in bytes. If the tcpout client (indexer/receiver connection) send buffer is full, a new indexer is randomly selected from the list of indexers provided in the server setting of the target group stanza.
* This setting allows the forwarder to switch to a new indexer/receiver if the current indexer/receiver is slow.
* A non-zero value means that a max send buffer size is set.
* 0 means no limit on max send buffer size.
* Default: 0

Additionally, 9.1.3/9.2.1 and above will correctly log the target IP address causing tcpout blocking:

WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=20, _refCount=2, _waitingAckQ.size()=4, _supportsACK=1, _lastHBRecvTime=Thu Jan 20 11:07:43 2024 is using 20214400 bytes. Total tcpout queue size is 26214400. Warningcount=20

Note: This config works correctly starting with 9.1.3/9.2.1. Do not use it with 9.2.0/9.1.0/9.1.1/9.1.2; there is an incorrect calculation in those versions (https://community.splunk.com/t5/Getting-Data-In/Current-dest-host-connection-is-using-18446603427033668018-bytes/m-p/678842#M113450).
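The switching behavior described above can be sketched as a toy model (plain Python with invented names; this is not Splunk code): when the send buffer for the current indexer connection reaches maxSendQSize, the forwarder picks a different indexer at random; 0 disables the limit.

```python
import random

def pick_indexer(current, indexers, send_q_bytes, max_send_q_size):
    """Toy model of the documented maxSendQSize behavior:
    if the send buffer for the current connection is full, switch
    to a randomly chosen different indexer; 0 means no limit."""
    if max_send_q_size == 0 or send_q_bytes < max_send_q_size:
        return current  # buffer not full (or no limit): keep connection
    candidates = [i for i in indexers if i != current]
    return random.choice(candidates) if candidates else current

indexers = ["idx1:9997", "idx2:9997", "idx3:9997"]
print(pick_indexer("idx1:9997", indexers, 10_000, 26_214_400))      # stays on idx1
print(pick_indexer("idx1:9997", indexers, 30_000_000, 26_214_400))  # switches to idx2 or idx3
```

The point of the setting is exactly this escape hatch: a slow receiver fills its send buffer, and instead of blocking, the forwarder rebalances to another indexer from the target group.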
Hi, how do I change the max column to a readable format like 40 mins, 30 mins, or 1 hr?
I tried the search, but I'm not getting the max number.
During a graceful indexer/HF restart/stop (basically wherever splunktcp is configured), look at the last entries in metrics.log before splunk finally stops. Where the splunktcpin queue (name=splunktcpin) shows the same value for current_size, largest_size, and smallest_size (but none of the queues from parsingqueue to indexqueue is blocked), TcpInputProcessor fails to drain the splunktcpin queue even though parsingqueue and indexqueue are empty.

02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.397 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=30, smallest_size=0
02-18-2024 00:52:24.396 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=16, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0

During graceful shutdown, pipeline processors are expected to drain the queues. This issue is fixed in 9.2.1 and 9.1.4.
| chart max('event.Properties.duration') by event.Properties.endpoint

Something like this?
As @PickleRick said, your question is unclear on a key element: desired output. Your first search groups by three fields (host, index, and sourcetype), whereas the last search can only give one of the three (host). Does this mean you want to give each host the same usage_lastest_hour no matter which index or sourcetype the first search's output comes from? In that case, you can do something like

| tstats count where index=* by host, index, sourcetype
| append [search (index=_internal host=splunk_shc source=*license_usage.log* type=Usage)
  | stats sum(b) as Usage by h
  | eval Usage=round(Usage/1024/1024/1024,2)
  | rename h as host, Usage as usage_lastest_hour]
| stats values(count) as events_latest_hour values(usage_lastest_hour) as usage_lastest_hour by host, index, sourcetype
| sort - events_latest_hour, usage_lastest_hour

Note: There can only be one primary sort order; I chose events_latest_hour as it appears to be the most logical. The addtotals command does nothing in either search; its Total value is identical to the singular numeric field in each, so I scrapped it.
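The append-then-stats pattern above effectively attaches the single per-host usage value to every (host, index, sourcetype) row. A rough Python sketch of that combination, with hypothetical data standing in for the tstats and license_usage results:

```python
# Hypothetical tstats output: (host, index, sourcetype) -> event count
counts = {
    ("splunk_shc", "main", "access_combined"): 120,
    ("splunk_shc", "_internal", "splunkd"): 4500,
}
# Hypothetical per-host license usage in GB from license_usage.log
usage_by_host = {"splunk_shc": 1.27}

rows = []
for (host, index, sourcetype), count in counts.items():
    rows.append({
        "host": host, "index": index, "sourcetype": sourcetype,
        "events_latest_hour": count,
        # every row for a host carries that host's total usage
        "usage_lastest_hour": usage_by_host.get(host),
    })
# primary sort order: events_latest_hour, descending
rows.sort(key=lambda r: r["events_latest_hour"], reverse=True)
```

Note the asymmetry this makes visible: usage is a per-host total, repeated on each of that host's rows, while the event count stays per (host, index, sourcetype).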
Yes, I'm trying to find the max duration and the endpoint associated with it, using event.Properties.endpoint and event.Properties.duration.
That depends on what you want to show for your risk score. Do you want to show max? Do you want to show avg? Is there a group-by field you want to use? Note that the excerpted examples from the documentation are very specific to the problems they illustrate; they are not a substitute for you describing your desired output. If you don't tell people, volunteers have no way to read your mind. In the simplest form, you can experiment with something like

| chart avg('event.Properties.riskScore') max('event.Properties.riskScore') min('event.Properties.riskScore') stdev('event.Properties.riskScore')

But you already did this. So, what is your desired output? Alternatively, what is the use case you are trying to apply? What is the business problem you are trying to solve or illustrate using this dashboard?
Hi, thanks for the response. Yes, I have gone through the aggregate functions; could you please help with how to implement them in the code?

Exp 1:

chart eval(avg(size)/max(delay)) AS ratio BY host user

OR

timechart eval(round(avg(cpu_seconds),2)) BY processor
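The arithmetic behind chart eval(avg(size)/max(delay)) AS ratio BY host user can be sketched in plain Python; the events below are hypothetical, with field names borrowed from the SPL:

```python
from collections import defaultdict

# Hypothetical events: (host, user, size, delay)
events = [
    ("hostA", "alice", 200, 2.0),
    ("hostA", "alice", 400, 4.0),
    ("hostB", "bob",   900, 3.0),
]

# Group by (host, user), mirroring the BY clause
groups = defaultdict(lambda: {"sizes": [], "delays": []})
for host, user, size, delay in events:
    g = groups[(host, user)]
    g["sizes"].append(size)
    g["delays"].append(delay)

# eval(avg(size)/max(delay)): aggregate first, then divide per group
for (host, user), g in sorted(groups.items()):
    ratio = (sum(g["sizes"]) / len(g["sizes"])) / max(g["delays"])
    print(host, user, round(ratio, 2))
```

The key point the eval(...) syntax expresses is that both aggregates are computed within each group before the division, not row by row.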