All Topics

I have a reliable base query that finds the events containing the information I want. I built a rex using the field extractor, but applying the rex expression in a search does not yield any results: the values(gts_percent) column is always blank.

Sample query:

index="june_analytics_logs_prod" $serial$ log_level=info message=*hardware_controller*
| rex field=message "(?=[^G]*(?:GTS weight:|G.*GTS weight:))^(?:[^\.\n]*\.){7}\d+\w+,\s+\w+:\s+(?P<gts_percent>\d+)"
| convert rmunit(gts_percent)
| chart values(gts_percent) by _time

Sample raw result:

{"bootcount":8,"device_id":"XXX","environment":"prod_walker","event_source":"appliance","event_type":"GENERIC","local_time":"2025-02-20T00:47:48.124-06:00","location":{"city":"XX","country":"XX","latitude":XXX,"longitude":XXX,"state":"XXX"},"log_level":"info","message":"martini::hardware_controller: GTS weight: 17.05kg, tare weight: 8.1kg, net weight: 8.95kg, fill weight: 6.8kg, percent: 100%\u0000","model_number":"XXX","sequence":403659,"serial":"XXX","software_version":"2.3.0.276","ticks":0,"timestamp":1740034068,"timestamp_ms":1740034068124}

I am trying to extract the percent value (100 in this sample) from the raw event. Where is my rex going wrong?
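One thing worth checking: the generated pattern requires seven literal "." characters ((?:[^\.\n]*\.){7}) before the capture, but the message field in the sample contains only four, so the rex may have been built against _raw rather than message. A quick way to sanity-check a pattern outside Splunk is plain Python re; a minimal sketch, assuming the target really is the trailing percent value, using a much simpler expression than the generated one:

```python
import re

# The message field from the sample raw event above.
message = ("martini::hardware_controller: GTS weight: 17.05kg, tare weight: 8.1kg, "
           "net weight: 8.95kg, fill weight: 6.8kg, percent: 100%")

# Simpler approach: anchor on the literal label "percent:" and capture the digits.
simple = re.compile(r"percent:\s+(?P<gts_percent>\d+)%")

m = simple.search(message)
print(m.group("gts_percent") if m else "no match")  # → 100
```

In SPL the equivalent would be roughly | rex field=message "percent:\s+(?<gts_percent>\d+)%" — worth trying before debugging the longer pattern.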
We have successfully ingested GuardDuty logs from an AWS SQS queue. The payload is structured JSON, but the extracted fields are all prefixed with 'BodyJson'. A workaround for field aliases and extractions is to use that prefix in local/props.conf, e.g.

EVAL-dest_name = if('BodyJson.detail.service.resourceRole'="TARGET" AND 'BodyJson.detail.resource.resourceType'="AccessKey", 'BodyJson.detail.resource.accessKeyDetails.userName', null())

But that's pretty messy and will need maintaining. I tried to flatten it out with a props.conf alias:

FIELDALIAS-BodyJsonremove = BodyJson.* as *

but that didn't work. Has anyone another solution, other than local/props.conf? Is there something in aws_sqs_tasks.conf (inputs) that can flatten the JSON into the format the TA for Amazon expects? Thanks.
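If you can put a processing step between SQS and Splunk (for example a Lambda, or a scripted/modular input you control), stripping the wrapper before indexing avoids the props.conf maintenance entirely. A minimal sketch of the unwrap step, assuming the payload sits under a top-level "BodyJson" key (the helper name is made up):

```python
import json

def unwrap_bodyjson(raw):
    """Return the event with the BodyJson wrapper removed, if present."""
    event = json.loads(raw)
    # Promote the nested payload to the top level; pass other events through unchanged.
    return event.get("BodyJson", event)

raw = '{"BodyJson": {"detail": {"service": {"resourceRole": "TARGET"}}}}'
flat = unwrap_bodyjson(raw)
print(flat["detail"]["service"]["resourceRole"])  # → TARGET
```

With the wrapper gone, the field names line up with what the TA's aliases and extractions expect, and the EVAL workarounds become unnecessary.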
I have a dashboard in an app that uses a .js file, and I'd like to understand how to modify it, given that I'm on Splunk Cloud. I've already searched my platform, but I can't find anything like an add-on for .js files. However, I'm sure my file is located at it-IT/static/app/app/yourfilename.js. I'd also like to know whether, after modifying the file, I need to perform any refresh.
Hi all, I am trying to append data to results based on a lookup file.

Example: temperature and pressure are stored at 1 sample per minute, all the time. The times when a batch was in production are logged in a lookup file. Batch IDs are generated and stored after the sensor values have been logged to Splunk.

Base search:

index=ndx sourcetype=srctp (sensor=temperature OR sensor=pressure) earliest=-1d@d latest=@d
| table _time sensor value

Result:

_time  sensor       value
...    temperature  75
...    pressure     100

The lookup file has 3 columns: the start and finish times for a batch, and the batch ID.

startTime  finishTime  batchID
...        ...         b1
...        ...         b2
...        ...         b3

For each row in the result table of the base search, I want to append the batch ID, to give:

_time  sensor       value  batchID
...    temperature  75     b2
...    pressure     100    b2

I have tried:

| lookup batch.csv _time >= startTime, _time <= finishTime OUTPUTNEW batchID

The lookup needs to find a batch where _time >= startTime AND _time <= finishTime, but I can't see anything in lookup that works with this sort of condition, only direct field matches. Any ideas would be appreciated; thanks in advance.
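The lookup command only does equality matching, so in SPL this usually ends up as an inputlookup subsearch plus a where filter, or a KV store lookup. The matching itself is plain interval containment; a minimal sketch in Python with hypothetical epoch times, just to pin down what the search needs to compute:

```python
# Each batch covers a [startTime, finishTime] window (epoch seconds; values hypothetical).
batches = [
    {"startTime": 100, "finishTime": 199, "batchID": "b1"},
    {"startTime": 200, "finishTime": 299, "batchID": "b2"},
    {"startTime": 300, "finishTime": 399, "batchID": "b3"},
]

def batch_for(ts):
    """Return the batchID whose window contains ts, else None."""
    for b in batches:
        if b["startTime"] <= ts <= b["finishTime"]:
            return b["batchID"]
    return None

# Sensor rows from the base search (times hypothetical).
rows = [{"_time": 250, "sensor": "temperature", "value": 75},
        {"_time": 251, "sensor": "pressure", "value": 100}]
for row in rows:
    row["batchID"] = batch_for(row["_time"])

print([r["batchID"] for r in rows])  # → ['b2', 'b2']
```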
Hi. According to the documentation at https://docs.splunk.com/observability/en/gdi/integrations/cloud-azure.html#cloud-azure, the resource type microsoft.cache/redis is included. That relates to Redis instances of the Premium SKU in Azure, and I can definitely see the metrics for such objects. But what about resources of the type microsoft.cache/redisenterprise, which relates to Redis instances of the Enterprise SKU in Azure? While creating or changing parameters for the Azure integration in Splunk Observability Cloud, I could not find microsoft.cache/redisenterprise in the list of available types. Is this resource type included in the Azure integration? Thank you.
Hello, I used to use custom CSS to set specific panel widths:

<row id="MasterRow">
  <panel depends="$alwaysHideCSS$">
    <title>Single value</title>
    <html>
      <style>
        #Panel1{width:15% !important;}
        #Panel2{width:85% !important;}
      </style>
    </html>
  </panel>
  <panel id="Panel1">....</panel>
  <panel id="Panel2">....</panel>
</row>

But since last week's update (9.x), this code no longer works and my panels are back to the default widths (50/50, 33/33/33, etc.). Any thoughts on what the new code needs to be to set the width/height, please? I can't afford to switch all my dashboards to the Dashboard Studio format in the coming weeks. Thank you in advance.
Hi. Working with dashboards, I found that I can put the legend only at the bottom of the chart. This is described here: https://docs.splunk.com/observability/en/data-visualization/charts/chart-options.html#show-on-chart-legend — "This option lets you specify a dimension to be displayed in a legend below the chart." But if I have dimensions for several objects on the chart, I can see only one or two names in the legend, even if I have three or more; there is no room for all of them. Even opening the chart in full screen, I still cannot see every name in the legend. Can I somehow scroll the legend to make all the names visible? Is it possible to put the legend on the right side of the chart? Thank you.
Working on a use case that entails finding all containers/artifacts that match certain field conditions. The idea is to run an API query against the SOAR artifact endpoint to get all the artifacts, then use the returned artifact fields in further automation. A few questions:

1) Does SOAR support API filtering as described in this article: https://medium.com/@lovely_peel_hamster_92/splunk-phantom-rest-api-filters-956a58854bfc? Specifically, the ability to access child objects in the JSON. The documentation does not seem to mention accessing child objects: https://docs.splunk.com/Documentation/Phantom/4.10.7/PlatformAPI/RESTQueryData

2) Also, when filters are applied, we seem to lose the ability to restrict the output to a list of fields. It returns the entire JSON, while the requirement is for specific fields.

What we are actually trying to achieve: check for closed SNow INCs and close the corresponding Splunk ES notables and SOAR containers. We have broken the approach down into modules and have the component parts working, but the aforementioned filtering is tripping us up; solving it will let us complete the playbook. I also found this, and we are attempting something very similar: https://community.splunk.com/t5/Splunk-SOAR/Playbook-run-on-bulk-events/m-p/667251. Again, the filtering is key to completing this. Also open to suggestions on the overall approach. Thanks in advance!
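If the REST filter syntax can't reach the child objects you need, one fallback is to fetch pages from the artifact endpoint and do the filtering and field projection client-side. A minimal sketch of that post-processing step, with illustrative (not confirmed) artifact field names:

```python
def select_artifacts(artifacts, wanted_fields, conditions):
    """Keep artifacts whose nested fields match all conditions, then project
    only the wanted fields. Paths are dotted, e.g. "cef.sourceAddress"."""
    def dig(obj, dotted):
        for key in dotted.split("."):
            if not isinstance(obj, dict) or key not in obj:
                return None
            obj = obj[key]
        return obj

    out = []
    for art in artifacts:
        if all(dig(art, path) == value for path, value in conditions.items()):
            out.append({field: dig(art, field) for field in wanted_fields})
    return out

# Hypothetical artifacts as returned by the artifact endpoint (shape illustrative only).
arts = [
    {"id": 1, "name": "a1", "cef": {"sourceAddress": "10.0.0.1"}},
    {"id": 2, "name": "a2", "cef": {"sourceAddress": "10.0.0.2"}},
]
hits = select_artifacts(arts, ["id", "name"], {"cef.sourceAddress": "10.0.0.2"})
print(hits)  # → [{'id': 2, 'name': 'a2'}]
```

This keeps the REST query simple (page + pull everything) and moves the child-object logic into the playbook, at the cost of transferring more JSON per call.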
Currently, when I open the dashboard for the first time, the dropdown starts searching and after a few seconds returns 'no results found'. I don't want it to search on first load; instead I want to show the message 'Please select a value in the dropdown'. Please assist — I am a beginner as far as Splunk knowledge is concerned.
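One common Simple XML pattern is to hide the result panels until the dropdown token is set, and show a placeholder panel until then, using the depends/rejects attributes. A sketch, assuming a dropdown token named sel (adjust names to your dashboard):

```xml
<input type="dropdown" token="sel" searchWhenChanged="true">
  <!-- your existing choices/populating search -->
</input>

<!-- Shown only while $sel$ is unset -->
<row rejects="$sel$">
  <panel>
    <html><p>Please select a value in the dropdown.</p></html>
  </panel>
</row>

<!-- Shown (and searched) only once $sel$ is set -->
<row depends="$sel$">
  <panel>
    <!-- your existing search panels go here -->
  </panel>
</row>
```

Because the result row is hidden until the token exists, its searches don't dispatch on first load.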
How do I rename an index? We already have an index that is receiving data. We now want to change the index name so that all the old data is searchable under the new name (index=new_name) and all new logs land in the new index.

indexes.conf:

[sony_app_JUPITER_prod]
homePath   = volume:primary/$_index_name/db
coldPath   = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

We want the new index to be sony_app_235678_prod. I have gone through https://community.splunk.com/t5/Getting-Data-In/How-to-rename-an-index/m-p/28599, but we cannot stop Splunk, and I didn't understand the third point. Can anyone please be more descriptive? Thanks.
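An in-place rename requires stopping Splunk and moving the bucket directories, so if downtime is off the table, one workaround sketch is to create the new index, repoint the inputs at it, and keep the old index searchable until its data ages out (the stanza below mirrors your existing one; adjust volumes as needed):

```
# indexes.conf -- add the new index alongside the old one
[sony_app_235678_prod]
homePath   = volume:primary/$_index_name/db
coldPath   = volume:primary/$_index_name/colddb
thawedPath = $SPLUNK_DB/$_index_name/thaweddb

# inputs.conf (on the forwarders) -- route new data to the new index
# index = sony_app_235678_prod
```

A macro or eventtype that expands to (index=sony_app_JUPITER_prod OR index=sony_app_235678_prod) can then make both look like one index to users during the transition.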
Hello everyone,

I am currently using ES version 7.1.0. Recently (I'm not sure exactly when) the maintenance team upgraded Splunk and ES. I had been separating types of security incidents by manually creating and using security domains from security_domains.csv. As you know, there are two fields in that CSV: security_domain and label. For example, I created a new security domain named "EPP Endpoint", filling in both columns, but whenever a notable was created, its domain appeared as "Epp endpoint", which is not what I created. I was expecting something like "EPP Endpoint - my rule name - Rule", but it wasn't working that way.

Then, without changing anything in the security domains, I created another notable with the domain "EPP Endpoint", and now my new notables are created as I originally expected, e.g. "EPP Endpoint - my new rule - Rule". I thought this might be related to the upgrade, but I checked the release notes and known issues and couldn't find any clue. In the correlationsearches.csv lookup I can still see the difference, yet my new notables are created as expected.

I wonder why it works in two different ways. It affects my whole architecture, because I fetch these notables into SOAR, where I have to define my correlations to determine their types; if I don't define them correctly, a notable won't be classified properly. What I mean is that "Epp endpoint - rulename - Rule" and "EPP Endpoint - rulename - Rule" differ. I hope someone can help me with this issue. Thanks in advance.
I have both Chinese and English field names coming from the Windows event log, and I would like to use field aliases so that the English field names match both the Chinese and English fields. However, I found that some fields, such as src and src_port, are missing for the Chinese events. This might be related to the fact that field aliases take effect only after EXTRACT, REPORT, or LOOKUP. Does anyone have suggestions on how to effectively map between Chinese and English field names? (I can only make changes on the SH; I am not allowed to change the other instances.)
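For a search-time mapping that lives only on the SH, one sketch is a props.conf alias per field — note that the sourcetype and the Chinese field names below are hypothetical placeholders; substitute the names your events actually carry:

```
# props.conf on the search head, scoped to your Windows sourcetype (placeholder name)
[XmlWinEventLog]
# Alias the Chinese field names to the CIM names (placeholders -- adjust to your data)
FIELDALIAS-zh_src      = 来源地址 AS src
FIELDALIAS-zh_src_port = 来源端口 AS src_port
```

If the Chinese names only appear after an EXTRACT/REPORT of your own, the aliases should still fire, since aliases are applied after those extraction classes in the search-time operation sequence.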
I have configured an app and added 7 different source files in a single inputs.conf, all with the same index and sourcetype. The parent directory name is the same for all 7 source files; only the sub-directory names change. The log files have the extension *.log. But only one log file is sending events to Splunk. Sample inputs.conf:

[monitor:///ABC-DEF50/Platform/*.log]
disabled = false
index = os_linux
sourcetype = nix:messages
crcSalt = <SOURCE>

Last week's data for the remaining 6 source files only appeared in Splunk after 2-3 days. I checked and can see that indexing is being delayed. How do I fix this? Kindly help.
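One thing worth checking, given that the files sit in varying sub-directories: in a monitor stanza a single * matches only one path segment, while ... recurses to any depth. If the seven files are not all directly under Platform/, a recursive stanza would look like this (a sketch of your stanza, not a confirmed diagnosis of the delay):

```
# inputs.conf -- "..." recurses through sub-directories; "*" matches one level only
[monitor:///ABC-DEF50/Platform/.../*.log]
disabled = false
index = os_linux
sourcetype = nix:messages
crcSalt = <SOURCE>
```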
Hi, I have this search:

| `es_notable_events`
| search timeDiff_type=current
| timechart minspan=30m sum(count) as count by security_domain

When I search for the macro `es_notable_events` itself (across all apps and any owner), it does not show up — yet the search above gives results, and running the macro on its own also shows results.
Hi folks, I am looking to create a trellis view for a pie chart in a dashboard, but I am unable to create it and end up with the error below. Could someone help with this — is it possible to create a trellis using a pie chart in a dashboard?
I am writing a simple TA to read a text file and turn it into a list of JSON events. I am getting a WARN message for each event from the TcpOutputProc processor, such as the one below (I removed the rest of the message containing details):

02-21-2025 01:06:04.001 -0500 WARN TcpOutputProc [2061704 indexerPipe] - Pipeline data does not have indexKey.

It seems that I am missing something simple. I would greatly appreciate some insights/pointers towards debugging this issue. The TA code is on GitHub: https://github.com/ww9rivers/TA-json-modinput. Many thanks in advance!
Hello team — parsing issue.

I have built a distributed Splunk lab using a trial license. The lab consists of three indexers, one cluster manager, one search head, one instance serving as the Monitoring Console (MC), Deployment Server (DS), and License Manager (LM), along with two universal forwarders. The forwarder is monitoring the /opt/log/routerlog directory, where I have placed two log files: cisco_ironport_web.log and cisco_ironport_mail.log. The logs are successfully forwarded to the indexers and then to the search head. However, log parsing is not happening as expected. I have applied the same props.conf and transforms.conf configuration on both the indexer cluster and the search head.

props.conf and transforms.conf file paths:

Indexer path: /opt/splunk/etc/peer-apps/_cluster/local
Search head path: /opt/splunk/etc/apps/search/local

transforms.conf:

[extract_fields]
REGEX = ^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?P<src_ip>\d+\.\d+\.\d+\.\d+)\s+(?P<email>\S+@\S+)\s+(?P<domain>\S+)\s+(?P<url>\S+)
FORMAT = timestamp::$1 src_ip::$2 email::$3 domain::$4 url::$5

props.conf:

[custom_logs]
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TRANSFORMS-extract_fields = extract_fields
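One likely culprit: TRANSFORMS- invokes index-time transforms, which only take effect with DEST_KEY/WRITE_META set, whereas plain field extraction is a search-time job for REPORT-, which belongs on the search head. A sketch of that change (same stanza names as yours; this assumes the events really do arrive with sourcetype=custom_logs):

```
# props.conf (search head) -- search-time extraction uses REPORT-, not TRANSFORMS-
[custom_logs]
REPORT-extract_fields = extract_fields

# transforms.conf (search head) -- named capture groups make FORMAT unnecessary
[extract_fields]
REGEX = ^(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s+(?P<src_ip>\d+\.\d+\.\d+\.\d+)\s+(?P<email>\S+@\S+)\s+(?P<domain>\S+)\s+(?P<url>\S+)
```

The timestamp/line-breaking settings (SHOULD_LINEMERGE, TIME_*) still belong in props.conf on the indexers, since those are applied at index time.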
Hello, I have a fresh install of Splunk and the Meraki TA app. I have configured several inputs in the app; however, I am seeing a large number of warning messages like the one below under various inputs (for example, appliance_vpn_statuses and appliance_vpn_stats):

2025-02-24 03:12:56,971 WARNING pid=50094 tid=MainThread file=cisco_meraki_connect.py:col_eve:597 | Could not identify datetime field for input: cisco_meraki_appliance_vpn_statuses
Hi, I am currently trying to reference an SPL variable in Simple XML for a table panel in a dashboard. I would like each field value in the "Free Space (%)" column to change colour depending on what the "color" variable in the query evaluates to (green or red). I found one method online that involves creating a token in a <set> tag and then referencing it in the <colorPalette> tag, but I haven't been able to get it working:

<table>
  <title>TABLESPACE_FREESPACE</title>
  <search>
    <query>
      index="database" source="tables"
      | eval BYTES_FREE = replace(BYTES_FREE, ",", "")
      | eval BYTES_USED = replace(BYTES_USED, ",", "")
      | eval GB_USED = BYTES_USED / (1024 * 1024 * 1024)
      | eval GB_FREE = BYTES_FREE / (1024 * 1024 * 1024)
      | eval GB_USED = floor(GB_USED * 100) / 100
      | eval GB_FREE = floor(GB_FREE * 100) / 100
      | eval CALCULATED_PERCENT_FREE = (GB_FREE / (GB_USED + GB_FREE)) * 100
      | eval CALCULATED_PERCENT_FREE = floor(CALCULATED_PERCENT_FREE * 10) / 10
      | eval color = if(CALCULATED_PERCENT_FREE >= PERCENT_FREE, "#00FF00", "#FF0000")
      | rename TABLESPACE_NAME as "Tablespace", GB_USED as "Used Space (Gb)", GB_FREE as "Free Space (Gb)", PERCENT_FREE as "Free Space (%)"
      | table "Tablespace" "Used Space (Gb)" "Free Space (Gb)" "Free Space (%)"
    </query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <set token="color">$result.color$</set>
    </done>
  </search>
  <option name="count">21</option>
  <option name="drilldown">none</option>
  <option name="wrap">false</option>
  <format type="color" field="Free Space (%)">
    <colorPalette type="expression">$color$</colorPalette>
  </format>
</table>

Any help would be appreciated, thanks.
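Note that $result.color$ only captures the value from the first result row, so a token cannot colour each row independently. The expression palette, on the other hand, is evaluated per cell against that cell's own value, so the per-row effect needs the threshold inside the expression. A sketch, using 20 as a hypothetical fixed threshold (the per-row PERCENT_FREE value isn't visible to the palette):

```xml
<format type="color" field="Free Space (%)">
  <!-- "value" is the cell's own value; the threshold 20 is a placeholder -->
  <colorPalette type="expression">if(value >= 20, "#00FF00", "#FF0000")</colorPalette>
</format>
```

If the threshold genuinely varies per row, an alternative is to compute a status column (e.g. "ok"/"low") in the eval and colour that column with a map-type palette instead.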