All Topics

Hello, I need help assigning a text box value to a radio button, but it's not working.

<form>
  <label>assign text box value to Radio button</label>
  <fieldset submitButton="false">
    <input type="radio" token="tokradio" searchWhenChanged="true">
      <label>field1</label>
      <choice value="category=$toktext$">Group</choice>
      <default>category=$toktext$</default>
      <initialValue>category=$toktext$</initialValue>
    </input>
    <input type="text" token="toktext" searchWhenChanged="false">
      <label>field2</label>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>tokradio=$tokradio$</title>
      <table>
        <search>
          <query>| makeresults</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

Thanks in advance. @bowesmana @tscroggins @gcusello @yuanliu @ITWhisperer
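One pattern that may help here (a sketch, not a verified fix): Simple XML does not re-evaluate a <choice value> when a token embedded in it changes, but a <change> handler on the text input can push the value into the radio token directly:

```xml
<!-- Sketch (untested): set $tokradio$ whenever the text box changes,
     instead of embedding $toktext$ inside the radio <choice> value. -->
<input type="text" token="toktext" searchWhenChanged="false">
  <label>field2</label>
  <change>
    <set token="tokradio">category=$value$</set>
  </change>
</input>
```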
Hi,

I'm facing an issue with creating a support ticket. I'm on the Enterprise version at a company that has a support account. I'm new on the security team; I've tried to contact support via the support form (4 times) but got no answer. I've tried to call support, but they answered that I need to ask my manager to add me via the admin portal, or to contact support for help with that. My manager isn't able to do that. This is really blocking me; does anyone have advice?

Thanks
Hi Team,

I have created a dashboard with the query below. The output is a column chart with a time picker: it displays the Top 10 Event Codes with their counts for the chosen time range after clicking Submit.

index=windows host=* source=WinEventLog:System EventCode=* Type=Error OR Type=Critical
| stats count by EventCode Type
| sort -count
| head 10

Once the results are displayed in the column chart, my requirement is: if we click one of the EventCodes (for example 4628) in the Top 10, it should navigate to a new panel or window showing the Top 10 host, source, Message, and EventCode, along with the count, for that EventCode. This should happen when clicking the EventCode in the column chart of the existing dashboard. For example:

index=windows host=* source=WinEventLog:System EventCode=4628 Type=Error OR Type=Critical
| stats count by host source Message EventCode
| sort -count
| head 10

Kindly let me know how to achieve this requirement in a dashboard.
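A common Simple XML pattern for this kind of drilldown (a sketch, assuming the queries above; the token name tok_eventcode is my own): the chart's drilldown captures the clicked value into a token, and a second panel only appears once that token is set:

```xml
<chart>
  <search>
    <query>index=windows host=* source=WinEventLog:System EventCode=* Type=Error OR Type=Critical
| stats count by EventCode Type | sort -count | head 10</query>
  </search>
  <drilldown>
    <!-- $click.value$ is the clicked x-axis value (the EventCode) -->
    <set token="tok_eventcode">$click.value$</set>
  </drilldown>
</chart>

<panel depends="$tok_eventcode$">
  <table>
    <search>
      <query>index=windows host=* source=WinEventLog:System EventCode=$tok_eventcode$ Type=Error OR Type=Critical
| stats count by host source Message EventCode | sort -count | head 10</query>
    </search>
  </table>
</panel>
```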
Hi there. A simple question; it's not for real usage, just curiosity. Does the UF block inputs for system paths by default? For example, theoretically an input like this:

[monitor:///...]
whitelist=.
index=root
sourcetype=root_all
disabled=0

should ingest all non-binary files under the "/" path, including subdirectories. In actual fact, I find only the "/boot" path ingested. Is this a security feature that excludes system paths under "/" from being ingested? Thanks
Which is better when migrating to new hardware: splunk offline --enforce-counts or a data rebalance? And do I have to put the peer into manual detention in the indexer cluster before running a data rebalance or splunk offline?
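For reference, the two operations are invoked quite differently (a sketch; $SPLUNK_HOME and the exact sequence are placeholders, not a verified runbook):

```
# On the peer being retired: offline and wait until its buckets meet
# replication/search factor elsewhere before the process exits
$SPLUNK_HOME/bin/splunk offline --enforce-counts

# A data rebalance, by contrast, is started from the cluster manager:
$SPLUNK_HOME/bin/splunk rebalance cluster-data -action start
```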
Hi,

I've got one weird problem: when I run a query in Splunk, events are found, but the event log fields are always blank.

However, the problem is fixed by the following steps:
1. In the Splunk search result, go to All Fields in the left rail
2. Change Coverage: 1% or more to All fields
3. Click Deselect All
4. Click Select All Within Filter

Then the problem is fixed and I can see the event logs. Even if I change from All fields back to Coverage: 1% or more, I can still see the event logs. But after I close the browser tab, go back to Splunk, and search again, the problem is back. So the question is: why does Coverage: 1% or more cause this problem for me on the first query?

Anyone have an idea about this? Thanks.
Trying to set up the Splunk OTel Collector using the image quay.io/signalfx/splunk-otel-collector:latest in Docker Desktop or Azure Container Apps, to read logs from a file using the filelog receiver and the splunk_hec exporter. However, I'm receiving the following error:

2024-03-07 12:56:27 2024-03-07T17:56:27.001Z info exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "Post \"https://splunkcnqa-hf-east.com.cn:8088/services/collector/event\": dial tcp 42.159.148.223:8088: i/o timeout (Client.Timeout exceeded while awaiting headers)", "interval": "2.984810083s"}

I'm using the config below:

extensions:
  memory_ballast:
    size_mib: 500

receivers:
  filelog:
    include:
      - /var/log/*.log
    encoding: utf-8
    fingerprint_size: 1kb
    force_flush_period: "0"
    include_file_name: false
    include_file_path: true
    max_concurrent_files: 100
    max_log_size: 1MiB
    operators:
      - id: parser-docker
        timestamp:
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          parse_from: attributes.time
        type: json_parser
    poll_interval: 200ms
    start_at: beginning

processors:
  batch:

exporters:
  splunk_hec:
    token: "XXXXXX"
    endpoint: "https://splunkcnqa-hf-east.com.cn:8088/services/collector/event"
    source: "otel"
    sourcetype: "otel"
    index: "index_preprod"
    profiling_data_enabled: true
    tls:
      insecure_skip_verify: true

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [splunk_hec]
Hi, I am trying to explore APM in Splunk Observability. However, I am facing a challenge in setting up AlwaysOn Profiling. I am wondering if this feature is not available in the trial version. Can someone confirm?
Hello,

How do I assign a search_now value equal to info_max_time in _raw? I am trying to push "past" data into a summary index using the collect command, and I want to use search_now as a baseline time. I appreciate your help. Thank you.

Here's my attempt, using some code from @bowesmana, but it gave me a duplicate search_now:

index=original_index
| addinfo
| eval _raw=printf("search_now=%d", info_max_time)
| foreach "*"
    [| eval _raw = _raw.case(isnull('<<FIELD>>'), "",
        mvcount('<<FIELD>>')>1, ", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"",
        true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"")
     | fields - "<<FIELD>>"]
| collect index=summary testmode=false file=summary_test_1.stash_new name="summary_test_1" marker="report=\"summary_test_1\""
Is the geostats command supported by this visualization type for displaying city names in cluster bubbles? It seems not. Here is the command I am using for my result:

| (some search that produces destination IPs and a total count by them)
| iplocation prefix=dest_iploc_ dest_ip
| eval dest_Region_Country=dest_iploc_Region.", ".dest_iploc_Country
| geostats globallimit=0 locallimit=15 binspanlat=21.5 binspanlong=21.5 longfield=dest_iploc_lon latfield=dest_iploc_lat sum(Total) BY dest_Region_Country

In the search result visualization (which uses the old dashboard cluster map visualization, not the new Dashboard Studio one), this returns a proper cluster map: there are bubbles marking areas on the grid with many total connections, and when moused over I can see the individual regions/cities contributing to each total.

However, when I put this query into my Dashboard Studio visualization using Map > Bubble, it either breaks (when there are too many city values... because there are as many cities as there are), or, when I change the grouping to use countries for example, it breaks in a different way: it tries to alphabetize all the countries under each bubble. (I am obviously mousing over a bubble in Bogota, Colombia here, not Busan, South Korea or anywhere in Germany.) Not to mention the insane lag caused by this dashboard element.

What to do for my use case? Switch off of Dashboard Studio? That aside, has anyone figured out a way to make interconnected bubbles/points showing sources and destinations like this (this is not intended as an ad, but an example)?
Hi, we have a log that records how many times each message type was sent by a user in every session. The log contains the user's ID (data.entity-id), the message ID (data.message-code), the message name (data.message-name), and the total number of times it was sent during each session (data.total-received).

I'm trying to create a table where the first column shows the user's ID (data.entity-id), and each subsequent column shows the sum of data.total-received for one message type. Ideally I would be able to create a dashboard where I can maintain a list of the data.message-codes to be used as columns.

Example data:

data: { entity-id: 1 message-code: 445 message-name: handshake total-received: 10 }
data: { entity-id: 1 message-code: 269 message-name: switchPlayer total-received: 20 }
data: { entity-id: 1 message-code: 269 message-name: switchPlayer total-received: 22 }
data: { entity-id: 2 message-code: 445 message-name: handshake total-received: 12 }
data: { entity-id: 2 message-code: 269 message-name: switchPlayer total-received: 25 }
data: { entity-id: 2 message-code: 269 message-name: switchPlayer total-received: 30 }

Ideally the table would look like this:

Entity-id | handshake | switchPlayer
1         | 10        | 42
2         | 12        | 55

Is this possible? What would be the best way to store the message-codes in a dashboard? Thanks!
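One way to get that pivot (a sketch, assuming the field names above; the index name my_index is a placeholder): chart produces one column per message name and sums the totals per user:

```
index=my_index
| chart sum("data.total-received") OVER "data.entity-id" BY "data.message-name"
```

To restrict the columns to a chosen set of message codes from a dashboard, a multiselect input's token could be spliced into a `search "data.message-code" IN (...)` clause before the chart command.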
Hi,

I am attempting to integrate Microsoft Azure with Splunk Enterprise to retrieve the status of App Services. Could someone please provide a step-by-step guide for the integration? I have attached a snapshot for reference.
Dear Splunk Community,

I am seeking your thoughts and suggestions on an error I am facing with TrackMe:

ERROR search_command.py _report_unexpected_error 1013 Exception at "/opt/splunk/etc/apps/trackme/bin/splunkremotesearch.py", line 501: This TrackMe deployment is running in Free limited edition and you have reached the maximum number of 1 remote deployment, only the first remote account (local) can be used

Background:
- Objective: set up TrackMe monitoring (a virtual tenant - dsm, dhm & mhm) for our remote Splunk deployment (Splunk Cloud).
- The TrackMe app is installed on our on-prem Splunk instance, and the Splunk target's URL and port were added under Configuration --> Remote deployments accounts (only one account).
- No issues with connectivity; the connection test is successful.
- We are using the free license, and per the TrackMe documentation we are allowed 1 remote deployment.

Could we use the free license in our case, or how do we get rid of the 'local' deployment? Please suggest.
I'm getting this error via PowerShell during the Splunk Universal Forwarder installation:

The term 'C:\Program Files\SplunkUniversalForwarder\bin\splunk' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
working.ps1:17 char:3

Here is how I defined my variables:

$SplunkInstallationDir = "C:\Program Files\SplunkUniversalForwarder"
& "$SplunkInstallationDir\bin\splunk" start --accept-license --answer-yes --no-prompt

It works if I run it manually only. Kindly assist.
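One thing worth checking (a sketch, not a confirmed fix): "not recognized" at script time often means the binary does not exist yet when line 17 runs, e.g. the MSI install has not finished laying down files. Waiting for the file and calling splunk.exe explicitly may help:

```powershell
$SplunkInstallationDir = "C:\Program Files\SplunkUniversalForwarder"
$SplunkExe = Join-Path $SplunkInstallationDir "bin\splunk.exe"

# Wait up to ~2 minutes for the installer to create the binary
$tries = 0
while (-not (Test-Path $SplunkExe) -and $tries -lt 24) {
    Start-Sleep -Seconds 5
    $tries++
}

& $SplunkExe start --accept-license --answer-yes --no-prompt
```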
Hi team,

What minimum bandwidth is necessary between the indexers and the rest of the platform components (Heavy Forwarders, Search Heads, Cluster Master, License Server, Deployment Server, etc.) for the different communications?
Hello,

We have an index that stopped receiving logs. Since we do not manage the host sending the logs, I wanted to get more information before reaching out. The one interesting error that showed up right about the time the logs stopped is the following; I have not been able to find anything useful about this type of error. Also, the error is being thrown by the indexer:

Unable to read from raw size file="/mnt/splunkdb/<indexname>/db/hot_v1_57/.rawSize": Looks like the file is invalid.

Thanks for any assistance.
Heavy Forwarder issues: on version 9.0.2 and can't connect to the indexer after an upgrade from 8.2.0. Does anyone know of a more current discussion than this 2015 post? https://community.splunk.com/t5/Getting-Data-In/Why-am-I-getting-error-quot-Unexpected-character-while-looking/m-p/250699

ERROR httpclient request [6244 indexerpipe] - caught exception while parsing http reply: unexpected character while looking for value: '<'
ERROR S2SOverHttpOutputProcessor [6244 indexerpipe] - http 502 bad gateway
Hey, can someone help me with getting the profiling metrics (like CPU and RAM used by the app) to show up in the Splunk Observability portal? I can get the app metrics: I used a simple curl Java app that curls Google every second, and this shows up in the APM metrics. I have done all the configuration to enable profiling per the Splunk docs, but nothing shows up in the profiling section. Is it because I am using the free trial?

I am trying this on a simple EC2 instance, instrumenting the app with the java -jar command. I have been exporting the necessary environment variables and have added the required Java options while instrumenting the app using splunk-otel-agent-collector.jar, but nothing shows up. Please help.
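For comparison, a minimal invocation that enables AlwaysOn Profiling with the Splunk OTel Java agent usually looks something like this (a sketch; the jar name, service name, and app.jar are placeholders for your own setup):

```
export OTEL_SERVICE_NAME=curl-demo
export SPLUNK_PROFILER_ENABLED=true
export SPLUNK_PROFILER_MEMORY_ENABLED=true   # memory/allocation profiling

java -javaagent:./splunk-otel-javaagent.jar -jar app.jar
```

Note that CPU/RAM runtime metrics come from the agent's JVM metrics rather than from the profiler itself, so both settings may be worth checking.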
Hello,

Can someone help me with a search to find out whether any changes have been made to Splunk reports (e.g., a Palo Alto report) in the last 30 days?

Thanks
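One starting point (a sketch, not a verified answer): the REST endpoint for saved searches exposes an updated timestamp per report, which can be filtered to the last 30 days; the strptime format may need adjusting for your instance's timestamp style:

```
| rest /services/saved/searches
| eval updated_epoch=strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where updated_epoch >= relative_time(now(), "-30d")
| table title eai:acl.app eai:acl.owner updated
```

Note this shows *when* a report was last modified, not *what* changed; for the who/what detail, the _audit index is the usual place to look.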
Hi Experts,

I am encountering an issue with using filter tokens in a specific row on my dashboard. I have two filters named ABC and DEF; the token for ABC is $abc$ and for DEF is $def$. I want to pass these tokens only to one specific row, while for the others I want to reject them.

For the rows where I need to pass the tokens, I've used the following syntax: <row depends="$abc$ $def$"></row>. For the rows where I don't want to use the tokens, I've used: <row rejects="$abc$ $def$"></row>. However, when I use the rejects condition, those rows are hidden. I want these rows to remain visible. Could someone please advise on how to resolve this issue? I would appreciate any help. Thank you in advance!
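Worth noting as a sketch of an alternative (index and query below are placeholders): depends/rejects control row *visibility*, so rejects will always hide a row once the tokens are set. If the goal is only to keep the filters out of certain searches while the rows stay visible, one option is simply not to reference $abc$/$def$ in those rows' queries:

```xml
<row>
  <panel>
    <table>
      <search>
        <!-- this row stays visible and ignores the filters:
             the query never references $abc$ or $def$ -->
        <query>index=main | stats count by sourcetype</query>
      </search>
    </table>
  </panel>
</row>
```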