All Posts

This is exactly what I am looking for. However, for some reason I am not getting any values for the field "usage_lastest_hour" - any idea why this field is not displaying results? All the others are displaying as expected with the search you provided.
This has been added to the fixed issues (SPL-248188, SPL-248140): https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues You may also want to check out https://community.splunk.com/t5/Knowledge-Management/Slow-indexer-receiver-detection-capability/m-p/683768#M9963
From 9.1.3/9.2.1 onwards, the slow indexer/receiver detection capability is fully functional (SPL-248188, SPL-248140): https://docs.splunk.com/Documentation/Splunk/9.2.1/ReleaseNotes/Fixedissues

You can enable it on the forwarding side in outputs.conf:

maxSendQSize = <integer>
* The size of the tcpout client send buffer, in bytes. If the tcpout client (indexer/receiver connection) send buffer is full, a new indexer is randomly selected from the list of indexers provided in the server setting of the target group stanza.
* This setting allows the forwarder to switch to a new indexer/receiver if the current indexer/receiver is slow.
* A non-zero value means that a max send buffer size is set.
* 0 means no limit on max send buffer size.
* Default: 0

Additionally, 9.1.3/9.2.1 and above will correctly log the target IP address causing tcpout blocking:

WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=20, _refCount=2, _waitingAckQ.size()=4, _supportsACK=1, _lastHBRecvTime=Thu Jan 20 11:07:43 2024 is using 20214400 bytes. Total tcpout queue size is 26214400. Warningcount=20

Note: This config works correctly starting with 9.1.3/9.2.1. Do not use it with 9.2.0/9.1.0/9.1.1/9.1.2 (there is an incorrect calculation: https://community.splunk.com/t5/Getting-Data-In/Current-dest-host-connection-is-using-18446603427033668018-bytes/m-p/678842#M113450).
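For illustration, here is a minimal outputs.conf sketch with the setting in place; the group name, server addresses, and the 8 MB value are assumptions for this example, not recommendations:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Example receivers (assumption): replace with your own indexers.
server = 10.0.0.1:9997, 10.0.0.2:9997
# Assumed 8 MB cap: when this connection's send buffer fills,
# the forwarder switches to another indexer in the list.
maxSendQSize = 8388608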
Hi, how do I change the max column to a readable format, like 40 mins, 30 mins, or 1 hr?
I tried the search, but I am not getting the max number.
During a graceful indexer/HF restart/stop (basically wherever splunktcp is configured), look at the last entries in metrics.log before splunk finally stops. If the splunktcpin queue (name=splunktcpin) shows the same value for current_size, largest_size, and smallest_size (but none of parsingqueue through indexqueue are blocked), TcpInputProcessor is failing to drain the splunktcpin queue even though parsingqueue and indexqueue are empty.

02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:54:28.370 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:54:28.368 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=7, current_size=40, largest_size=40, smallest_size=0
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:57.364 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:57.362 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=1507
02-18-2024 00:53:26.372 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=1148
02-18-2024 00:53:26.370 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1507, largest_size=1507, smallest_size=0
02-18-2024 00:52:55.371 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, blocked=true, max_size_kb=500, current_size_kb=499, current_size=1148, largest_size=1148, smallest_size=0
02-18-2024 00:52:55.369 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.397 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=30, smallest_size=0
02-18-2024 00:52:24.396 +0000 INFO Metrics - group=queue, ingest_pipe=1, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=splunktcpin, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=16, smallest_size=0
02-18-2024 00:52:24.380 +0000 INFO Metrics - group=queue, ingest_pipe=0, name=indexqueue, max_size_kb=10240, current_size_kb=0, current_size=0, largest_size=1, smallest_size=0

During graceful shutdown, pipeline processors are expected to drain the queue. This issue is fixed in 9.2.1 and 9.1.4.
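To spot this pattern quickly, a minimal SPL sketch over the fields shown in these log entries (the span and filters are assumptions to adapt):

index=_internal source=*metrics.log* group=queue name=splunktcpin blocked=true
| timechart span=1m max(current_size) by ingest_pipe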
| chart max('event.Properties.duration') by event.Properties.endpoint

Something like this?
As @PickleRick said, your question is unclear on a key element: the desired output. Your first search groups by three fields (host, index, and sourcetype), whereas the last search can only give one of the three (host). Does this mean you want to give each host the same usage_lastest_hour no matter which index or sourcetype the first search output comes from? In that case, you can do something like

| tstats count where index=* by host, index, sourcetype
| append
    [search (index=_internal host=splunk_shc source=*license_usage.log* type=Usage)
    | stats sum(b) as Usage by h
    | eval Usage=round(Usage/1024/1024/1024,2)
    | rename h as host, Usage as usage_lastest_hour]
| stats values(count) as events_latest_hour values(usage_lastest_hour) as usage_lastest_hour by host, index, sourcetype
| sort - events_latest_hour, usage_lastest_hour

Note: There can only be one primary sort order. I chose events_latest_hour as it appears to be the most logical. The addtotals command does nothing in either search; the Total value is identical to the singular numeric field in each, so I scrapped it.
Yes, I am trying to find the max duration and the endpoint associated with it, using event.Properties.endpoint and event.Properties.duration.
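If the goal is the single endpoint with the largest duration, a sketch along these lines might work (assuming the fields extract under those names):

| stats max(event.Properties.duration) as max_duration by event.Properties.endpoint
| sort - max_duration
| head 1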
That depends on what you want to show for your risk score. Do you want to show max? Do you want to show avg? Is there a groupby field you want to use? Note that the excerpted examples from the documentation are very specific to the problems those examples illustrate. They are not a substitute for describing your desired output. If you don't tell people, volunteers have no way to read your mind. In the simplest form, you can experiment with something like

| chart avg('event.Properties.riskScore') max('event.Properties.riskScore') min('event.Properties.riskScore') stdev('event.Properties.riskScore')

But you already did this. So, what is your desired output? Alternatively, what is the use case you are trying to apply? What is the business problem you are trying to solve/illustrate using this dashboard?
Hi, thanks for the response. Yes, I have gone through the aggregate functions. Could you please help with how to implement them in the code?

Example 1:

chart eval(avg(size)/max(delay)) AS ratio BY host user

OR

timechart eval(round(avg(cpu_seconds),2)) BY processor
max(), avg(), and stdev() are all aggregation functions which you can include in a stats command in your search: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Aggregatefunctions
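For instance, a minimal sketch reusing the riskScore field mentioned elsewhere in this thread (the field name is an assumption about how your events extract):

| stats max(event.Properties.riskScore) as max_risk avg(event.Properties.riskScore) as avg_risk stdev(event.Properties.riskScore) as stdev_risk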
Hi, I am not sure about this risk score value. How do I create a dashboard tile for these fields?
Seems like a glaring omission and no answer in 9 months.  
@marnall has better eyes than me and spotted the mix of italics and non-italics in the bracketed text. The final regex will likely be a combination of our suggestions.
I would highly recommend the website https://regex101.com/ as it allows you to see previews of your regex extractions as you write them. This regex might work:

on the remote host:\n\n(?<Vendor>[^\[\s]*)\s(?<Software>[^\[\s]*)\s*\[version\s(?<Version>[^\]]*)\]\s\[installed on (?<Date>[^\]]*)\]
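To apply it in a search, a sketch along these lines might work (the pluginText field name is borrowed from the other reply in this thread and is an assumption):

| rex field=pluginText "on the remote host:\n\n(?<Vendor>[^\[\s]*)\s(?<Software>[^\[\s]*)\s*\[version\s(?<Version>[^\]]*)\]\s\[installed on (?<Date>[^\]]*)\]"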
This regular expression works in regex101.com using the sample data.

| rex field=pluginText "host:\s+(?<vendorSoftware>.+?)\s+\[(?<version>[^\]]+)] \[(?<installedDate>[^\]]+)"

It looks for the "host" introductory text and skips the spaces which follow. The next run of text (terminated by whitespace before a left bracket) becomes the software name. The text in the two sets of brackets becomes the version and date, respectively.
Unfortunately you'd have to change the Python code to do this, at least for the timer app. If you open timer_connector.py in $SOAR_HOME$/apps/timer_*/, you'll see it formats the event name with this function:

def _format_event_name(self):
    config = self.get_config()
    event_name = self._handle_py_ver_compat_for_input_str(config['event_name'])
    iso_now = datetime.datetime.now(pytz.utc).isoformat()
    label_name = config.get('ingest', {}).get('container_label', '')
    event_name = re.sub(
        r'(^|[^0-9a-zA-Z]+)(\$now)($|[^0-9a-zA-Z]+)',
        r'\g<1>{}\g<3>'.format(iso_now),
        event_name
    )

The time for the $now token is obtained from datetime.datetime.now(pytz.utc).isoformat(), explicitly setting the timezone to UTC. I have not looked for the code writing the activity log, but I would assume it is the same.
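If you did want local time instead of UTC, here is a minimal sketch of the kind of change involved, assuming pytz is available (the zone name is a placeholder, and app upgrades may overwrite a local edit like this):

import datetime
import pytz

# Hypothetical modification: render $now in a local timezone instead of UTC.
local_tz = pytz.timezone('Europe/Berlin')  # assumption: substitute your own zone
iso_now = datetime.datetime.now(local_tz).isoformat()
print(iso_now)  # e.g. 2024-02-18T01:54:28.370000+01:00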
Looks like a networking issue. Your SOAR instance cannot find a network route to 192.168.208.144. Some things you could try (see the sketch after this list):
- Ping or interact with that IP address via any protocol, to see if it connects outside of SOAR.
- Ping or interact with another IP address via any protocol, to see if networking is working on the SOAR machine.
- Check that your SOAR instance and 192.168.208.144 are in the same subnet, or have switches that can carry traffic between them.
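For example, from a shell on the SOAR host, a few standard checks might look like this (the IP comes from your error; port 443 is an assumed service port):

ping -c 3 192.168.208.144
ip route get 192.168.208.144
nc -vz 192.168.208.144 443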
I also expected the LOG field to be extracted. Were the changes to props/transforms installed on the first full Splunk instance that sees the data? Was that instance restarted? Is the screenshot showing new data (since the restart)?
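One way to confirm what that instance actually loaded is btool; a sketch, with your_sourcetype as a placeholder:

$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug
$SPLUNK_HOME/bin/splunk btool transforms list --debug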