All Posts

I have a report with a table where I am showing uptime availability of various products. Currently the table is returning only results that fall below 100%. That makes sense overall, but I need all the data, so I need results with no data to show as 100%. For the life of me I cannot figure it out. Please, all-knowing Splunk gods, help me.

index=my_data data.environment.application="MY APP" data.environment.environment="test"
| eval estack="my_stack"
| fillnull value="prod" estack data.environment.stack
| where 'data.environment.stack'=estack
| streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
| eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
| transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
| stats sum(duration) as downtime by data.componentId
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
| rename data.componentId AS Component, avail AS Availability
| table Component, Availability
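One possible approach, sketched under assumptions (the lookup all_components.csv and its Component column are hypothetical stand-ins for wherever the full product list lives): append a 100% row for every known component after the availability calculation and keep the lower value per component, so components that never show any downtime fall back to 100.

... existing search down to | rename data.componentId AS Component, avail AS Availability ...
| append [| inputlookup all_components.csv | eval Availability=100]
``` components with measured downtime keep their real value; the rest only have the appended 100 ```
| stats min(Availability) AS Availability by Component
| table Component, Availability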
@ITWhisperer  Changed to match the format as detailed...

index=blah [search index=blah "BAD_REQUEST" | rex "(?i) requestId (?P<search>[^:]+)" | table search | dedup search]

...but the new format ONLY returned rows containing 92d246dd-7aac-41f7-a398-27586062e4fa [first row] and no other rows. I removed 'dedup' but that did not help. How can I include all returned items from the inner search as input to the outer [main] search?
This is the full error output:

Migrating to:
VERSION=9.2.0.1
BUILD=d8ae995bf219
PRODUCT=splunk
PLATFORM=Linux-x86_64

********** BEGIN PREVIEW OF CONFIGURATION FILE MIGRATION **********
Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 39, in <module>
    from splunk.rcUtils import makeRestCall, CliArgError, NoEndpointError, InvalidStatusCodeError
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rcUtils.py", line 17, in <module>
    from splunk.search import dispatch, getJob, listJobs
  File "/opt/splunk/lib/python3.7/site-packages/splunk/search/__init__.py", line 17, in <module>
    from splunk.rest.splunk_web_requests import is_v2_search_enabled
  File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/splunk_web_requests/__init__.py", line 1, in <module>
    import cherrypy
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/__init__.py", line 76, in <module>
    from . import _cprequest, _cpserver, _cptree, _cplogging, _cpconfig
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpserver.py", line 6, in <module>
    from cherrypy.process.servers import ServerAdapter
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/process/__init__.py", line 13, in <module>
    from .wspbus import bus
  File "/opt/splunk/lib/python3.7/site-packages/cherrypy/process/wspbus.py", line 66, in <module>
    import ctypes
  File "/opt/splunk/lib/python3.7/ctypes/__init__.py", line 551, in <module>
    _reset_cache()
  File "/opt/splunk/lib/python3.7/ctypes/__init__.py", line 273, in _reset_cache
    CFUNCTYPE(c_int)(lambda: None)
MemoryError
Error running pre-start tasks
Hi. If I understand correctly, your issue is that you have some data with online access, and after a certain point you want it deleted from Splunk and archived (or just deleted)? And you have to define this based on a retention time, as required e.g. by legislation?

There is (at least) one excellent .conf presentation on how Splunk handles data internally. You should read https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf. Even though it was given a couple of years ago, it is still mostly valid. Probably the most important thing missing from it is Splunk SmartStore, but based on your configuration you are not using it.

It is usually hard, or almost impossible, to get data removed at your exact retention times. As @gcusello already said, and as you can see in that presentation, events are stored in buckets, and a bucket expires and is removed only after all events inside it have expired. For that reason a bucket can contain much older data than you want.

How could you try to avoid this? Probably the only way to manage it is to ensure that every bucket contains events from only one day. Unfortunately this is not possible in 100% of cases; there are situations where your ingested data contains events from several days (e.g. you onboard a new source system and also ingest its old logs). One thing you could try is to set these hot bucket parameters (see the example sketch at the end of this post):

maxHotSpanSecs = <positive integer>
* Upper bound of timespan of hot/warm buckets, in seconds.
* This is an advanced setting that should be set with care and understanding of the characteristics of your data.
* Splunkd applies this limit per ingestion pipeline. For more information about multiple ingestion pipelines, see 'parallelIngestionPipelines' in the server.conf.spec file.
* With N parallel ingestion pipelines, each ingestion pipeline writes to and manages its own set of hot buckets, without taking into account the state of hot buckets managed by other ingestion pipelines. Each ingestion pipeline independently applies this setting only to its own set of hot buckets.
* If you set 'maxHotBuckets' to 1, splunkd attempts to send all events to the single hot bucket and does not enforce 'maxHotSpanSeconds'.
* If you set this setting to less than 3600, it will be automatically reset to 3600.
* NOTE: If you set this setting to too small a value, splunkd can generate a very large number of hot and warm buckets within a short period of time.
* The highest legal value is 4294967295.
* NOTE: the bucket timespan snapping behavior is removed from this setting. See the 6.5 spec file for details of this behavior.
* Default: 7776000 (90 days)

maxHotIdleSecs = <nonnegative integer>
* How long, in seconds, that a hot bucket can remain in hot status without receiving any data.
* If a hot bucket receives no data for more than 'maxHotIdleSecs' seconds, splunkd rolls the bucket to warm.
* This setting operates independently of 'maxHotBuckets', which can also cause hot buckets to roll.
* A value of 0 turns off the idle check (equivalent to infinite idle time).
* The highest legal value is 4294967295
* Default: 0

With those you could try to keep each hot bucket open for at most one day. But if/when you ingest events that are not from the current day, they will still mess this up! And unless you have a legal reason to do this, I would not propose closing every bucket after at most one day of events: it can lead to other issues, e.g. high bucket counts in a cluster, longer restart times, etc.

r. Ismo
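For illustration only, a minimal indexes.conf sketch of those settings; the index name my_daily_index and all of the values are assumed examples, not recommendations from this thread:

[my_daily_index]
# assumed example: roll a hot bucket once it spans more than one day of event time
maxHotSpanSecs = 86400
# assumed example: roll an idle hot bucket to warm after 6 hours without new data (default 0 disables the check)
maxHotIdleSecs = 21600
# retention is still applied per bucket: a bucket is frozen only after its newest event passes this age
frozenTimePeriodInSecs = 7776000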
What should the whole query look like?
This format of the query params

q=search%20index%3D_audit%20%5B%20%7C%20makeresults%20%7C%20eval%20e%3D1710924016000%2Cl%3D1710927616000%2C%20earliest%3De%2F1000%2C%20latest%3Dl%2F1000%20%7C%20fields%20earliest%20latest%20%5D

is what is required to search the _audit index for 1 hour. So if you can construct the subsearch and set the e and l parameters as in

%5B%20%7C%20makeresults%20%7C%20eval%20e%3D1710924016000%2Cl%3D1710927616000%2C%20earliest%3De%2F1000%2C%20latest%3Dl%2F1000%20%7C%20fields%20earliest%20latest%20%5D

it will do this search

index=_audit [ | makeresults | eval e=1710924016000,l=1710927616000, earliest=e/1000, latest=l/1000 | fields earliest latest ]
If you can change the URL parameters, then you can create a subsearch that takes the millisecond values as parameters e and l. In the subsearch you can do the division and name the resulting fields earliest and latest. When passed out of the subsearch they will be treated as earliest and latest.
I can't change the values that I paste into the URL. I can change the parameters in the query, but not the values. I have that number of milliseconds and can't manipulate it.
Can you change the URL in any way, or is that all you have to make a search with, with no other component or process in the middle?
Here is an old post that presents some kind of workaround: https://community.splunk.com/t5/Monitoring-Splunk/Why-splunkd-cannot-read-input-files-created-in-source-folder/m-p/184598 Maybe it helps, maybe not?
I can paste those values as URL parameters. So, I can have this URL as input: https://my.splunkcloud.com/en-GB/app/my_app/search?q=search%20index%3Dkubernetes_app%20env%3Dproduction%20service%3Dmy-service&display.page.search.mode=smart&dispatch.sample_ratio=1&earliest=1710525600000&latest=1710532800000
So, it depends on how you are getting these values and including them in your search. Please provide more details. (You may be able to use Splunk to preprocess your values in a subsearch, but it depends where they come from.)
Your search assumes that requestId has already been extracted into a field in the index. If you want to just do a string search based on the requestIds, try something like this:

index=blah [search index=blah "BAD_REQUEST" | rex "(?i) requestId (?P<search>[^:]+)" | table search | dedup search]

The fields search (and query) are given special treatment in subsearches: the field name is not returned, just the contents of the field.
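To illustrate roughly how that expands (the second requestId below is a made-up placeholder), a subsearch whose only field is named search gets its values inserted into the outer search as bare strings, OR'd together, something like:

index=blah (("92d246dd-7aac-41f7-a398-27586062e4fa") OR ("11111111-2222-3333-4444-555555555555"))

If only one term ends up in the expanded search, it is worth running the inner search on its own to confirm the rex actually returns more than one row.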
No, it's impossible to change the process, I don't control it. Is there any function in Splunk that can do this conversion? When I try earliest=timestamp/1000 it doesn't work.
It depends on how you are "auto-generating" them - you could possibly change the process that generates them to divide by 1000?
OK. Because I think you might be misunderstanding something. CIM is just a definition of fields which should be either present directly in your events or defined as calculated fields or automatic lookups. So the way to go is not to fiddle with the definition of the datamodel to fit the data, but rather the other way around: modify the data to fit the datamodel. There is already a good candidate for the "location" field I showed earlier - the dvc_zone field - which you can fill either at search time or at index time. Or you can even set it "statically" at the input level by using the _meta option.
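For example, a minimal inputs.conf sketch of the _meta approach; the monitor path, index, sourcetype, and the dc1 value are placeholders, not taken from this thread:

[monitor:///var/log/myapp/firewall.log]
# attach an indexed field dvc_zone=dc1 to every event from this input (placeholder values)
_meta = dvc_zone::dc1
index = network
sourcetype = myapp:firewall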
You could output their choices to a CSV store - these can be made user-specific with the create_context argument - see outputlookup in the Splunk documentation.
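A rough sketch of what that could look like; user_choices.csv, the choice field, and the create_context=user value are assumptions here (check the outputlookup documentation for the exact values your version accepts):

| makeresults
``` placeholder for however the user's selection is captured, e.g. from a dashboard token ```
| eval choice="option_a"
| outputlookup create_context=user user_choices.csv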
If it works then it is OK
As I wrote before - "I assume you checked the name for this particular Event Log (the name of the stanza must match the "Full Name" property from the EventLog properties page)" Especially the part in the parentheses is important. And yes, naming of the Event Logs can be a bit confusing sometimes. (You can of course get the Event Log name with a quick PowerShell as well without the need to click through the Event Viewer).
After the bin command, period_start will be an epoch (unix) time aligned to the start of the hour. In order to get a match, you should parse / reformat / convert the time from your lookup into a similarly aligned unix time. Then the stats command can match against the time and the value.
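As a sketch of that conversion (the lookup name rota.csv, the field shift_start, and its timestamp format are assumptions for illustration):

| inputlookup rota.csv
``` parse the lookup's text timestamp into epoch time ```
| eval period_start=strptime(shift_start, "%Y-%m-%d %H:%M:%S")
``` snap it to the start of the hour, the same alignment bin gives the event data ```
| bin span=1h period_start

Once both sides carry an hour-aligned epoch period_start, stats by period_start can match the lookup rows to the events.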