All Posts

@ayomotukoya As @richgalloway said, lastchanceindex is a pre-defined index in Splunk Cloud that accepts events sent to a non-existent index. So please create the index first before onboarding data to Splunk Cloud. For example, an input whose index value is invalid or does not exist will land in lastchanceindex:

    [Input Y]
    index = $%^&*
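To make this concrete, here is a minimal sketch of the correct flow (the stanza name, index name, and monitored path are hypothetical): create the index in Splunk Cloud first, then point the input at it in inputs.conf on the forwarder:

    # inputs.conf, a hedged sketch, not the poster's actual config.
    # The index named below must already exist in Splunk Cloud,
    # otherwise the events are diverted to lastchanceindex.
    [monitor:///var/log/app/app.log]
    index = my_app_index
    sourcetype = app_logs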
I have a reliable base query to find events containing the information I want. I built a rex using the field extractor, but applying the rex expression in a search does not yield any results; the values(gts_percent) column is always blank.

Sample query:

    index="june_analytics_logs_prod" $serial$ log_level=info message=*hardware_controller*
    | rex field=message "(?=[^G]*(?:GTS weight:|G.*GTS weight:))^(?:[^\.\n]*\.){7}\d+\w+,\s+\w+:\s+(?P<gts_percent>\d+)"
    | convert rmunit(gts_percent)
    | chart values(gts_percent) by _time

Sample raw result:

    {"bootcount":8,"device_id":"XXX","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC",
    "local_time":"2025-02-20T00:47:48.124-06:00",
    "location":{"city":"XX","country":"XX","latitude":XXX,"longitude":XXX,"state":"XXX"},
    "log_level":"info","message":"martini::hardware_controller: GTS weight: 17.05kg, tare weight: 8.1kg, net weight: 8.95kg, fill weight: 6.8kg, percent: 100%\u0000",
    "model_number":"XXX","sequence":403659,"serial":"XXX","software_version":"2.3.0.276","ticks":0,"timestamp":1740034068,"timestamp_ms":1740034068124}

I am trying to extract the percent value (bolded in the original post) from the raw event. Where is my rex messing up?
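One possible culprit: the field extractor may have built the pattern against _raw (the full JSON, with many more "." segments) while you apply it to the message field, so the (?:[^\.\n]*\.){7} segment count never lines up. A much simpler pattern may be more robust here; as a sketch, assuming the value of interest always appears as "percent: <number>%" in the message:

    index="june_analytics_logs_prod" $serial$ log_level=info message=*hardware_controller*
    ``` capture only the digits, so convert rmunit is no longer needed ```
    | rex field=message "percent:\s+(?P<gts_percent>\d+)%"
    | chart values(gts_percent) by _time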
There should be a bulletin message saying an event was put in lastchanceindex because the intended index doesn't exist.  Look for and correct the intended index name on the syslog server.
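As a quick check, a search along these lines (the time range is illustrative) should surface any events that were diverted:

    index=lastchanceindex earliest=-24h
    ``` group the strays so the offending source is easy to spot ```
    | stats count by host source sourcetype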
Hi @livehybrid thanks for the links. I'll add more details about the batches. They can be 1 minute in length up to several hours. They are not regular in length unfortunately; it depends on the process and numbers etc. There may also be several batches in a day, up to 50 on some days.

Looking at: https://docs.splunk.com/Documentation/Splunk/9.4.0/Knowledge/Defineatime-basedlookupinSplunkWeb
If we pre-set a lookahead time, it could be too short and return no ID, or too big and return multiple IDs.

Looking at: https://community.splunk.com/t5/Splunk-Search/How-to-configure-a-time-based-lookup-Temporal-lookup/m-p/367273
I can do a search for a single result using | inputlookup and | addinfo; that works fine. It's doing this in a FOR loop for each result that I'm stuck on.

I've tried the following, but it feels very inefficient (a sketch of it is shown below):
1. Add a column with just the date
2. Look up all IDs for that date
3. Use mvexpand to split multiple IDs into single events
4. Look up the start and finish times for each ID
5. Use where to filter on _time between start and finish
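For reference, a rough sketch of those five steps in SPL. The batch.csv columns (startTime, finishTime, batchID) come from the original question; the date-keyed lookup batch_by_date.csv is an assumption for illustration:

    index=ndx sourcetype=srctp (sensor=temperature OR sensor=pressure)
    | eval date=strftime(_time, "%Y-%m-%d")
    ``` assumed lookup keyed by date, returning a multivalue list of IDs ```
    | lookup batch_by_date.csv date OUTPUT batchID
    | mvexpand batchID
    ``` pull the time window for each candidate ID ```
    | lookup batch.csv batchID OUTPUT startTime finishTime
    | where _time >= startTime AND _time <= finishTime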
Try the following:
- Remove the backticks when you are searching for a macro.
- Check the permissions on the macro.
- Use Ctrl+Shift+E while your cursor is in the search box to expand the macro and check it is doing what you expect.
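If the macro still doesn't show up, you can list macro definitions and their sharing via REST; a sketch (assumes your role can run | rest, and the wildcard scope may need narrowing to an app or user):

    | rest /servicesNS/-/-/configs/conf-macros
    | table title eai:acl.app eai:acl.sharing definition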
My earlier reply was marked as spam by the message board. Let me try again. Thank you for the reply. That is the one thing I checked and double-checked in my attempts to fix my problem (the event data do not reach my Splunk Cloud instance). But that may not be the problem, as the message below (I removed the data payload and hostnames for privacy) indicates that an index is provided:

    02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey. [_path] = /app/splunk/etc/apps/TA-json-modinput/bin/nix_input.py\n[python.version] = python3\n[_raw] = </data><done /></event><event stanza="nix_input://ni" unbroken="1"><source>nix_input://ni</source><sourcetype>hits:unix:hosts</sourcetype><index>test</index><data>{"hostname":"......",......}\n[_meta] = timestamp::none punct::"</><_/></><_=\"://\"_=\"\"><>://</><>::</><></><>{\"\":\""\n[_stmid] = GUsvaYoWsFPrNDD.H\n[MetaData:Source] = source::nix_input\n[MetaData:Host] = host::......\n[MetaData:Sourcetype] = sourcetype::nix_input\n[_linebreaker] = _linebreaker\n[_nfd] = _nfd\n[_charSet] = UTF-8\n[_time] = 1740117963\n[_conf] = source::nix_input|host::......|nix_input|28\n[_channel] = 28\n

Much appreciated!
Thank you very much for the help. That is one thing I checked and double-checked in my attempts to fix the problem. But it may not be the case. In fact, the message shows that an index is provided in the event forwarding (I removed the payload and hostname for privacy):

    02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey. [_path] = /app/splunk/etc/apps/TA-json-modinput/bin/nix_input.py\n[python.version] = python3\n[_raw] = </data><done /></event><event stanza="nix_input://ni" unbroken="1"><source>nix_input://ni</source><sourcetype>hits:unix:hosts</sourcetype><index>test</index><data>{"hostname":"......",......}\n[_meta] = timestamp::none punct::"</><_/></><_=\"://\"_=\"\"><>://</><>::</><></><>{\"\":\""\n[_stmid] = GUsvaYoWsFPrNDD.H\n[MetaData:Source] = source::nix_input\n[MetaData:Host] = host::......\n[MetaData:Sourcetype] = sourcetype::nix_input\n[_linebreaker] = _linebreaker\n[_nfd] = _nfd\n[_charSet] = UTF-8\n[_time] = 1740117963\n[_conf] = source::nix_input|host::splunkhf-prod02|nix_input|28\n[_channel] = 28\n

Since this is actually a WARN message, I now suspect this may be the reason my event data do not get into my Splunk Cloud instance (the TA runs on a heavy forwarder). Much appreciated!
Unfortunately, older versions are not Cloud compatible. 
Hi @dataisbeautiful

Do the timings of batches overlap? Presumably if you had a batch starting each day at midnight, the batch you would be looking for would be the previous midnight? If that is the case then you may be able to use a time-based lookup, using just the start time of the batches as the lookup time field. This should work by returning the batch number where the _time of the event is after the start_time for that batch (and less than the start_time of the next batch), if that makes sense? Have a look at:
https://community.splunk.com/t5/Splunk-Search/How-to-configure-a-time-based-lookup-Temporal-lookup/m-p/367273
https://docs.splunk.com/Documentation/Splunk/9.4.0/Knowledge/Defineatime-basedlookupinSplunkWeb

Please let me know how you get on, and consider accepting this answer or adding karma to it if it has helped.

Regards
Will
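For illustration, a time-based lookup definition might look like this in transforms.conf (a sketch only; the stanza name, epoch time format, and offsets are assumptions to adapt to your batch lengths):

    # transforms.conf, a hedged sketch of a time-based (temporal) lookup
    [batch_lookup]
    filename = batch.csv
    # column in the CSV holding the batch start time
    time_field = startTime
    # assumes startTime is stored as epoch seconds
    time_format = %s
    # how far after the start time an event may fall and still match
    max_offset_secs = 86400
    min_offset_secs = 0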
We have successfully ingested GuardDuty logs from an AWS SQS queue. It's structured JSON, but the extracted fields are all prefixed with 'BodyJson'. A workaround for field aliases and extractions is to use that prefix in local/props.conf, e.g.

    EVAL-dest_name = if('BodyJson.detail.service.resourceRole'="TARGET" AND 'BodyJson.detail.resource.resourceType'="AccessKey", 'BodyJson.detail.resource.accessKeyDetails.userName', null())

But that's pretty messy and will need maintaining. I tried to flatten it out using a props.conf field alias:

    FIELDALIAS-BodyJsonremove = BodyJson.* as *

But that didn't work. Has anyone another solution, other than local/props.conf? Is there something in the aws_sqs_tasks.conf (inputs) that can flatten the JSON to the format the TA for Amazon expects? Thanks
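As an aside, FIELDALIAS does not support wildcards, which is likely why that attempt failed. A wildcard rename does work at search time, so as a sketch (the index name is assumed; the field paths are from the example above):

    index=aws_guardduty ``` assumed index name ```
    ``` strip the BodyJson. prefix from every extracted field ```
    | rename BodyJson.* AS *
    | table detail.service.resourceRole detail.resource.resourceType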
I have a dashboard in an app that uses a .js file. I'd like to understand how to modify it, given that I'm on Splunk Cloud. I've already searched my platform, but I can't find anything like add-ons for .js files. However, I'm sure that my file is located at it-IT/static/app/app/yourfilename.js. I'd also like to know if, after modifying my file, I need to perform any refresh.
Yuanliu, you were correct. During an audit of the PrintMon inputs, we discovered that the system\default configuration on all servers was disabling the necessary inputs on the print server. After modifying and consolidating the inputs from all servers onto the print server, the PrintService/Operational log, including Event ID 307 with the "total_pages" field, is now being collected correctly.
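For anyone landing here later, the enabling stanza would look something like this in inputs.conf on the print server (a sketch; your local stanza must override the disabled one coming from system\default):

    # inputs.conf, a hedged sketch for collecting the PrintService Operational channel
    [WinEventLog://Microsoft-Windows-PrintService/Operational]
    disabled = 0
    renderXml = false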
Thank you for your answer. I know about this option, but it is not suitable for every case.
Hi all

I am trying to append data to results based on a file. For example, temperature and pressure are stored at 1 sample per minute, all the time. The times when a batch was in production are logged in a lookup file. Batch IDs are generated and stored after the sensor values are logged to Splunk.

Base search:

    index=ndx sourcetype=srctp (sensor=temperature OR sensor=pressure) earliest=-1d@d latest=@d
    | table _time sensor value

Result:

    _time   sensor        value
    ...     temperature   75
    ...     pressure      100

The lookup file has 3 columns: the start and finish time for a batch, and the ID:

    startTime   finishTime   batchID
    ...         ...          b1
    ...         ...          b2
    ...         ...          b3

For each row in the result table of the base search, I want to append the batch ID to give:

    _time   sensor        value   batchID
    ...     temperature   75      b2
    ...     pressure      100     b2

I have tried:

    | lookup batch.csv _time >= startTime, _time <= finishTime OUTPUTNEW batchID

The lookup needs to find a batch where _time >= startTime AND _time <= finishTime. I can't see anything for lookup that works with this sort of condition, only where fields are a direct match. Any ideas would be appreciated; thanks in advance.
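One pattern that can work here without a time-based lookup is to merge the batch boundaries into the event stream and fill the ID downwards; a sketch, assuming startTime and finishTime are in epoch seconds:

    index=ndx sourcetype=srctp (sensor=temperature OR sensor=pressure) earliest=-1d@d latest=@d
    | table _time sensor value
    ``` turn each batch row into a pseudo-event at its start time ```
    | append [| inputlookup batch.csv | eval _time=startTime | table _time batchID finishTime]
    | sort 0 _time
    ``` carry the most recent batch ID forward onto the sensor events ```
    | filldown batchID finishTime
    | where isnotnull(sensor) AND _time <= finishTime
    | table _time sensor value batchID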
When there are too many items to display in the legend, you should see a clickable option to "see all" and then that will open up a view below the chart with a data table containing all items.  
Hi

According to the documentation https://docs.splunk.com/observability/en/gdi/integrations/cloud-azure.html#cloud-azure, resources with the type microsoft.cache/redis are included. This relates to Redis instances of the Premium SKU in Azure, and I can definitely see the metrics of such objects. But what about resources with the type microsoft.cache/redisenterprise (Redis instances of the Enterprise SKU in Azure)? While creating or changing parameters for the Azure integration in Splunk Observability Cloud, I could not find the type microsoft.cache/redisenterprise in the list of available types. Is this resource type included in the Azure integration?

Thank You
Hello,

I used to use custom CSS styles to set specific panel widths:

    <row id="MasterRow">
      <panel depends="$alwaysHideCSS$">
        <title>Single value</title>
        <html>
          <style>
            #Panel1{width:15% !important;}
            #Panel2{width:85% !important;}
          </style>
        </html>
      </panel>
      <panel id="Panel1">....</panel>
      <panel id="Panel2">....</panel>
    </row>

But since last week's update (9.x) this code doesn't work anymore and my panels reverted to the default widths (50/50 or 33/33/33, etc.). Any thoughts on the new code required to fix the width/height, please? I can't afford to switch all my dashboards to the Dashboard Studio format in the coming weeks...

Thank you in advance
Hello Giuseppe, Thank you so much for your quick response. That solved my issue for now.
Hi. Working with dashboards, I found that I can put the legend only at the bottom of the chart. This is described here: https://docs.splunk.com/observability/en/data-visualization/charts/chart-options.html#show-on-chart-legend ("This option lets you specify a dimension to be displayed in a legend below the chart.") But if I have dimensions for several objects on the chart, I can see only one or two names in the legend, even if I have 3 or more; there is no room for all of them. Even opening the chart in full screen, I still cannot see every name in the legend. Can I scroll through the names in the legend somehow to make all of them visible? Is it possible to put the legend on the right side of the chart?

Thank You