My earlier reply was "marked as spam" by the message board. Let me try again. Thank you for the reply. That is the one thing I checked and double checked in my attempt to fix my problem (the event data does not reach my Splunk Cloud instance). But that may not be the problem, as the message below (I removed the data payload and hostnames for privacy reasons) indicates that an index is provided:

02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey. [_path] = /app/splunk/etc/apps/TA-json-modinput/bin/nix_input.py\n[python.version] = python3\n[_raw] = </data><done /></event><event stanza="nix_input://ni" unbroken="1"><source>nix_input://ni</source><sourcetype>hits:unix:hosts</sourcetype><index>test</index><data>{"hostname":"......",......}\n[_meta] = timestamp::none punct::"</><_/></><_=\"://\"_=\"\"><>://</><>::</><></><>{\"\":\""\n[_stmid] = GUsvaYoWsFPrNDD.H\n[MetaData:Source] = source::nix_input\n[MetaData:Host] = host::......\n[MetaData:Sourcetype] = sourcetype::nix_input\n[_linebreaker] = _linebreaker\n[_nfd] = _nfd\n[_charSet] = UTF-8\n[_time] = 1740117963\n[_conf] = source::nix_input|host::......|nix_input|28\n[_channel] = 28\n

Much appreciated!
Thank you very much for the help. That is one thing I checked and double checked in my attempts to fix the problem, but it may not be the case. In fact, the message shows that an index is provided in the event forwarding (I removed the payload and hostname for privacy reasons):

02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey. [_path] = /app/splunk/etc/apps/TA-json-modinput/bin/nix_input.py\n[python.version] = python3\n[_raw] = </data><done /></event><event stanza="nix_input://ni" unbroken="1"><source>nix_input://ni</source><sourcetype>hits:unix:hosts</sourcetype><index>test</index><data>{"hostname":"......",......}\n[_meta] = timestamp::none punct::"</><_/></><_=\"://\"_=\"\"><>://</><>::</><></><>{\"\":\""\n[_stmid] = GUsvaYoWsFPrNDD.H\n[MetaData:Source] = source::nix_input\n[MetaData:Host] = host::......\n[MetaData:Sourcetype] = sourcetype::nix_input\n[_linebreaker] = _linebreaker\n[_nfd] = _nfd\n[_charSet] = UTF-8\n[_time] = 1740117963\n[_conf] = source::nix_input|host::splunkhf-prod02|nix_input|28\n[_channel] = 28\n

Since this is actually a WARN message, I now wonder whether this is the reason my event data does not get into my Splunk Cloud instance (the TA runs on a heavy forwarder). Much appreciated!
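For reference, a stripped-down sketch of how I understand the index is supposed to be attached to each event when the input is written with the Splunk SDK for Python (splunklib) - the scheme, sourcetype and index names below are placeholders rather than the TA's real ones, so this only illustrates the pattern, not our actual script:

# Minimal modular input sketch using splunklib; names are illustrative only.
import sys
from splunklib.modularinput import Script, Scheme, Event

class NixInput(Script):
    def get_scheme(self):
        scheme = Scheme("nix_input")
        scheme.description = "Example JSON modular input"
        scheme.use_single_instance = False
        return scheme

    def stream_events(self, inputs, ew):
        for input_name, input_item in inputs.inputs.items():
            # Setting index on the Event means the emitted stream carries the
            # index along with the payload for each event.
            ew.write_event(Event(
                stanza=input_name,
                sourcetype="hits:unix:hosts",
                index=input_item.get("index", "test"),
                data='{"hostname": "example"}',
            ))

if __name__ == "__main__":
    sys.exit(NixInput().run(sys.argv))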
Hi @dataisbeautiful Do the timings of batches overlap? Presumably if you had a batch starting each day at midnight, the batch you would be looking for would be the previous midnight? If that is the case then you may be able to look at using a time-based lookup, just using the start time of the batches as the lookup field. This should work by returning the batch number where the _time of the event is after the start_time for that batch (and less than the start_time of the next batch) - if that makes sense? Have a look at:

https://community.splunk.com/t5/Splunk-Search/How-to-configure-a-time-based-lookup-Temporal-lookup/m-p/367273
https://docs.splunk.com/Documentation/Splunk/9.4.0/Knowledge/Defineatime-basedlookupinSplunkWeb

Please let me know how you get on and consider accepting this answer or adding karma to this answer if it has helped. Regards Will
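To make that a bit more concrete, here is a rough sketch of what the lookup definition could look like in transforms.conf - the stanza name, CSV name and offsets are assumptions to adjust, and this assumes startTime is stored as epoch seconds (change time_format otherwise):

[batch_lookup]
filename = batch.csv
time_field = startTime
time_format = %s
min_offset_secs = 0
max_offset_secs = 86400

max_offset_secs limits how long after startTime an event can still match that batch, so setting it to roughly the longest expected batch duration keeps events from matching a batch that has already finished.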
We have successfully ingested GuardDuty logs from an AWS SQS queue. It's structured JSON, but the extracted fields are all prefixed with 'BodyJson'. A workaround for field aliases and extractions is to use that prefix in local/props.conf, e.g.

EVAL-dest_name = if('BodyJson.detail.service.resourceRole'="TARGET" AND 'BodyJson.detail.resource.resourceType'="AccessKey", 'BodyJson.detail.resource.accessKeyDetails.userName', null())

But that's pretty messy and will need maintaining. I tried to flatten it out using a props.conf field alias:

FIELDALIAS-BodyJsonremove = BodyJson.* as *

But that didn't work. Does anyone have another solution, other than local/props.conf? Is there something in aws_sqs_tasks.conf (inputs) that can flatten the JSON to the format the TA for Amazon expects? Thanks
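For illustration, a quick search-time sketch that relies on rename's wildcard support (the index and sourcetype here are placeholders) - it may help confirm whether simply stripping the BodyJson prefix produces the field names the TA expects, even though it doesn't solve the props.conf side:

index=your_aws_index sourcetype=your_sqs_sourcetype
| rename "BodyJson.*" AS *
| table detail.type detail.service.resourceRole detail.resource.resourceType detail.resource.accessKeyDetails.userName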
I have a dashboard in an app that uses a .js file. I'd like to understand how to modify it, given that I'm on Splunk Cloud. I've already searched my platform, but I can't find anything like add-ons for .js files. However, I'm sure that my file is located at it-IT/static/app/app/yourfilename.js. I'd also like to know if, after modifying my file, I need to perform any refresh.
Yuanliu, You were correct. During an audit of the PrintMon inputs, we discovered that the system\default configuration on all servers was disabling the necessary inputs on the print server. After modifying and consolidating the inputs from all servers onto the print server, the PrintService/Operational log, including Event ID 307 with the "total_pages" field, is now being collected correctly.
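For anyone who hits the same issue, the input that had to be enabled (and not overridden by system\default) looks roughly like the stanza below; the index is a placeholder, and the PrintService/Operational channel also has to be enabled on the Windows side before Event ID 307 is written at all:

[WinEventLog://Microsoft-Windows-PrintService/Operational]
disabled = 0
renderXml = false
index = wineventlog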
Hi all

I am trying to append data to results based on a file. For example, temperature and pressure are stored at 1 sample per minute all the time. The times when a batch was in production are logged in a lookup file. Batch IDs are generated and stored after the sensor values have been logged to Splunk.

Base search:

index=ndx sourcetype=srctp (sensor=temperature OR sensor=pressure) earliest=-1d@d latest=@d
| table _time sensor value

Result:

_time   sensor        value
...     temperature   75
...     pressure      100

The lookup file has 3 columns, the start and finish time for a batch and the ID:

startTime   finishTime   batchID
...         ...          b1
...         ...          b2
...         ...          b3

For each row in the result table of the base search, I want to append the batch ID to give:

_time   sensor        value   batchID
...     temperature   75      b2
...     pressure      100     b2

I have tried:

| lookup batch.csv _time >= startTime, _time <= finishTime OUTPUTNEW batchID

The lookup needs to find a batch where _time >= startTime AND _time <= finishTime. I can't see anything in the lookup command that works with this sort of condition, only where fields are a direct match. Any ideas would be appreciated, thanks in advance.
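One approach I am considering, in case lookup itself cannot do this, is to pull the lookup rows in and filter on the time range in SPL - a sketch assuming startTime and finishTime are epoch seconds and the batch list is small enough to stay under the join subsearch limits (events that fall outside every batch are dropped by the where clause):

index=ndx sourcetype=srctp (sensor=temperature OR sensor=pressure) earliest=-1d@d latest=@d
| table _time sensor value
| eval joiner=1
| join max=0 joiner [| inputlookup batch.csv | eval joiner=1]
| where _time>=startTime AND _time<=finishTime
| table _time sensor value batchID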
When there are too many items to display in the legend, you should see a clickable option to "see all" and then that will open up a view below the chart with a data table containing all items.
Hi According to the documentation https://docs.splunk.com/observability/en/gdi/integrations/cloud-azure.html#cloud-azure resources with the type microsoft.cache/redis are included. This relates to Redis instances with the Premium SKU in Azure, and I can indeed see the metrics for such objects. But what about resources with the type microsoft.cache/redisenterprise (which relates to Redis instances with the Enterprise SKU in Azure)? While creating or changing the parameters of the Azure integration in Splunk Observability Cloud I could not find the type microsoft.cache/redisenterprise in the list of available types. Is this resource type included in the Azure integration? Thank You
Hello, I used to use custom CSS styles to set specific panel widths:

<row id="MasterRow">
  <panel depends="$alwaysHideCSS$">
    <title>Single value</title>
    <html>
      <style>
        #Panel1{width:15% !important;}
        #Panel2{width:85% !important;}
      </style>
    </html>
  </panel>
  <panel id="Panel1">....</panel>
  <panel id="Panel2">....</panel>
</row>

But since last week's update (9.xx) this code no longer works and my panels have gone back to the default widths (50/50 or 33/33/33, etc.). Any thoughts on what the new code requires to set the width/height, please? I can't afford to switch all my dashboards to Dashboard Studio format in the coming weeks... Thank you in advance
Hi. Working with dashboards, I found that I can put the legend only at the bottom of the chart, as described here: https://docs.splunk.com/observability/en/data-visualization/charts/chart-options.html#show-on-chart-legend "This option lets you specify a dimension to be displayed in a legend below the chart." But if I have dimensions for several objects on the chart, I can see only one or two names in the legend, even if there are 3 or more; there is no room for all of them. I can open the chart in full screen and still cannot see every name in the legend. Can I scroll through the names in the legend somehow to make all of them visible? Is it possible to put the legend on the right side of the chart? Thank You
Hi @Karthikeya , you can override the index name before indexing, not after. Once data is indexed, you cannot move it from one index to another; the only way is to re-index all the events, which counts against your license twice. But why do you want to change the index? An index isn't a database table: you can have different, heterogeneous data in the same index. Remember that an index is usually defined based on two parameters: retention and access grants. Ciao. Giuseppe
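For reference, if the data has not been indexed yet, the index can be overridden at parse time on the first heavy forwarder or indexer with a props/transforms pair; a minimal sketch, where the sourcetype, regex and target index are placeholders to replace with your own:

# props.conf
[your:sourcetype]
TRANSFORMS-route_index = route_to_other_index

# transforms.conf
[route_to_other_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = your_target_index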
Hi @mrkhan48 , sorry but I don't understand: if the search you configured for the dropdown has no results, how can it have results later? Anyway, to avoid starting the dropdown, do not set a default or initial value. Ciao. Giuseppe
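A minimal Simple XML sketch of that pattern - no default on the dropdown, the result panel gated on the token, and a placeholder message shown until a value is picked; the index, token and field names are illustrative only:

<form>
  <fieldset submitButton="false">
    <input type="dropdown" token="selected_value" searchWhenChanged="true">
      <label>Select a value</label>
      <!-- no <default> and no <initialValue>, so nothing is pre-selected -->
      <search>
        <query>index=_internal | stats count by sourcetype | fields sourcetype</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
      <fieldForLabel>sourcetype</fieldForLabel>
      <fieldForValue>sourcetype</fieldForValue>
    </input>
  </fieldset>
  <row>
    <panel rejects="$selected_value$">
      <html><p>Please select a value in the dropdown</p></html>
    </panel>
    <panel depends="$selected_value$">
      <table>
        <search>
          <query>index=_internal sourcetype=$selected_value$ | timechart count</query>
        </search>
      </table>
    </panel>
  </row>
</form>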
Every command is giving results, so that's not the problem. The problem is the macro used in this search: when I search with this macro it gives me no results.
Working on a use case which entails finding all containers/artifacts that match certain field conditions. The idea is to run an API query against the SOAR artifact endpoint to get all the artifacts and use the returned artifact fields in further automation. A few questions in this respect:

1) Does SOAR support API filtering as described in this article - https://medium.com/@lovely_peel_hamster_92/splunk-phantom-rest-api-filters-956a58854bfc - specifically the ability to access child objects in JSON? The documentation does not seem to mention anything about accessing child objects. https://docs.splunk.com/Documentation/Phantom/4.10.7/PlatformAPI/RESTQueryData

2) Also, when filters are applied, we seem to lose the ability to restrict the output to a list of fields. It returns the entire JSON, while the requirement is for specific fields.

What we are actually trying to achieve: check for closed SNow INCs and close the corresponding Splunk ES notables and SOAR containers. We have broken the approach down into modules and have the component parts working, but the aforementioned filtering is tripping us up - solving it will let us complete the playbook. I also found this, and we are attempting something very similar: https://community.splunk.com/t5/Splunk-SOAR/Playbook-run-on-bulk-events/m-p/667251. Again, the filtering is key to completing this. Also open to suggestions on the overall approach. Thanks in advance!
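On question 1, a rough sketch of the kind of call we have in mind is below. The _filter_* and page_size query parameters are documented for the SOAR REST API, but the double-underscore traversal into the nested cef object is exactly the behaviour from the Medium article that we could not confirm in the official docs, so treat it as an assumption to test; the host, token and field names are placeholders.

# Sketch only: whether _filter_cef__sourceAddress reaches into the nested cef
# object on your SOAR version is the open question; host/token are placeholders.
import json
import requests

SOAR_HOST = "https://soar.example.com"
HEADERS = {"ph-auth-token": "<automation-user-token>"}

params = {
    # Django-style filter; the value must be JSON-encoded (hence json.dumps)
    "_filter_cef__sourceAddress": json.dumps("10.0.0.5"),
    "page_size": 0,  # 0 = return all matching records
}

resp = requests.get(f"{SOAR_HOST}/rest/artifact", headers=HEADERS,
                    params=params, verify=False)
resp.raise_for_status()

for artifact in resp.json().get("data", []):
    # The endpoint returns full records, so we pick out the fields we need here.
    print(artifact["id"], artifact.get("container"), artifact.get("name"))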
Currently when I open the dashboard for the first time, the dropdown starts searching and after a few seconds returns 'no results found'. I do not want it to search on first load; rather, I want to show the message 'Please select a value in the dropdown'. Please assist. I am a beginner as far as Splunk knowledge is concerned.