All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi @richgalloway @gcusello, when I ran splunk btool --debug check on the host, I observed the following:

C:\Program Files\SplunkUniversalForwarder\bin>splunk btool --debug check
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\apps\SplunkUniversalForwarder\local\app.conf
Checking: C:\Program Files\SplunkUniversalForwarder\etc\system\default\alert_actions.conf
Invalid key in stanza [webhook] in C:\Program Files\SplunkUniversalForwarder\etc\system\default\alert_actions.conf, line 229: enable_allowlist (value: false).
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\system\default\app.conf
No spec file for: C:\Program Files\SplunkUniversalForwarder\etc\apps\windows_test\local\app.conf

windows_test is the app where I had deployed the configurations. Thanks
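A narrower btool run can help confirm whether the deployed app's settings are actually being read; a hedged sketch (the path assumes a default Windows UF install, and windows_test is the app name from the post):

```
REM List the effective inputs.conf layering for just the windows_test app
"C:\Program Files\SplunkUniversalForwarder\bin\splunk" btool inputs list --debug --app=windows_test
```

The "No spec file" lines for app.conf in an app's local directory are generally informational rather than errors.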
Hi Splunkers, I have a multiselect input whose resulting values need to be passed to a macro. Can you please help with that? The need is to pass the multiselect values to the token $macros2$, where the multiselect value is itself a macro. Multiselect values:
1. value 1
2. value 2
3. value 3
4. All

Search: `macros1(`$macros2$`, now(), -15d@d, *, virus, *, *, *)`
Thanks in advance! Manoj Kumar S
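In Simple XML, a multiselect input can wrap each selected value and join them with a delimiter before the token is substituted into the search; a minimal sketch, where the choice values and labels are placeholders for the real ones:

```xml
<input type="multiselect" token="macros2">
  <label>Values</label>
  <choice value="*">All</choice>
  <choice value="value1">value 1</choice>
  <choice value="value2">value 2</choice>
  <choice value="value3">value 3</choice>
  <!-- Controls how the selected values are wrapped and joined in $macros2$ -->
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <delimiter>,</delimiter>
  <default>*</default>
</input>
```

With this, selecting value 1 and value 2 makes $macros2$ expand to "value1","value2" inside the macro call.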
Hi @secneer, to better describe your question, could you share some screenshots? Then: where did you locate the props.conf containing SEDCMD? Do you have intermediate Heavy Forwarders between the Universal Forwarder and the Indexers? Ciao. Giuseppe
Hi, I have the below table data, which I have timecharted with a 1-hour span. I want to remove the rows highlighted in red, as they come at a different time (09:00) compared to the other rows (22:00). Can I use the outlier command to perform this operation, and how can I achieve this requirement? Thank you in advance.

_time             B  C  D  E  F
2023-10-06 22:00
2023-10-07 22:00
2023-10-08 22:00
2023-10-09 09:00
2023-10-09 22:00
2023-10-10 09:00
2023-10-10 22:00
2023-10-11 22:00
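The outlier command targets numeric outliers in field values, so for rows at an unwanted hour it is usually simpler to filter on the hour component of _time; a sketch, assuming the 22:00 rows are the ones to keep:

```
<your base search>
| timechart span=1h count by host
| where strftime(_time, "%H") == "22"
```

strftime here renders only the hour of each row's _time, so the 09:00 rows are dropped after the timechart is built.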
Thank you @richgalloway and @gcusello for the response... but those unfortunately weren't the answers I was looking for. Now I realise I may not have explained it as well as I could; I apologise for that. The field that SEDCMD was applied to appears as an available field even when I search for data that does not have it in the logs. Say, it's been easily over 10 hours since the restart. Searching, right now, for the data of the last 15 minutes still shows that field, reporting that it's in 100% of the events of that search. That's what I don't understand/know how to fix. Thanks!
Thanks for the answer. Everything seems to be OK: disk not full, licenses OK, rebooted several times, restarted Splunk several times. But we still don't receive data into the indexes. To save time, I wondered if it's possible to back up some files under $SPLUNK_HOME/etc, then reinstall the Splunk software and copy the files into the new installation. Do you think it will work? Rgds Geir
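Backing up the whole etc tree (configs, apps, certificates) and restoring it into a fresh install is a common approach; a sketch of the tar round trip, using scratch directories in place of a real $SPLUNK_HOME:

```shell
# Scratch directory standing in for the existing $SPLUNK_HOME
OLD_HOME=$(mktemp -d)
mkdir -p "$OLD_HOME/etc/system/local"
printf '[general]\nserverName = myhost\n' > "$OLD_HOME/etc/system/local/server.conf"

# 1. Back up the entire etc tree before uninstalling
tar -czf /tmp/splunk-etc-backup.tar.gz -C "$OLD_HOME" etc

# 2. After a clean reinstall, restore it into the new $SPLUNK_HOME
NEW_HOME=$(mktemp -d)   # stands in for the freshly installed home
tar -xzf /tmp/splunk-etc-backup.tar.gz -C "$NEW_HOME"

ls "$NEW_HOME/etc/system/local"
```

Note that restoring etc preserves configuration but not indexed data, which lives elsewhere (by default under the var tree of the installation).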
Hi @deephi , good for you, see next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
Thanks for the help. This is noted.
Hi @secneer, as @richgalloway said, the SEDCMD command in props.conf works at index time, on newly arriving data. This means that your data is masked only from the moment you restarted Splunk after inserting the SEDCMD. The masking works on new data, not on old data: events already indexed cannot be modified, and remain until their bucket exceeds the retention time and is deleted. Ciao. Giuseppe
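For reference, a SEDCMD rule lives in props.conf on the first full Splunk instance that parses the data (indexer or Heavy Forwarder); a minimal sketch, with a hypothetical sourcetype and pattern:

```
# props.conf — masks 9-digit account numbers at index time (illustrative example)
[my:sourcetype]
SEDCMD-mask_acct = s/\d{9}/XXXXXXXXX/g
```

After editing, a restart of that instance is needed, and only events indexed from then on are masked.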
Hi @dgwann, if you run a search using dbquery, you should get a table as a result, which you can display as you like, as with every other kind of Splunk panel. Where the data comes from isn't relevant for the display. Ciao. Giuseppe
Hi @deephi, as @inventsekar said, and as you can read at https://docs.splunk.com/Documentation/Splunk/9.1.1/Installation/Systemrequirements#Unix_operating_systems, the Splunk UF is compatible with every Linux with kernel 3.x or higher. As you can read in an answer from a few days ago, the issue could be that AWS Linux has kernel 6.x, which isn't explicitly declared compatible at the above link even though it says "Kernel 5.4 or higher". I am confident that it's fully compatible with kernel 6.x as well. Ciao. Giuseppe
The ERR_CONNECTION_TIMED_OUT error is a common issue in web browsing, signifying that your browser couldn't establish a connection to the target website within a specified time frame. To address this, first, check your internet connection and ensure it's stable. Try loading other websites to confirm if the problem is site-specific. If it persists, clear your browser cache and cookies, as these may be causing conflicts. Temporarily disable your firewall or antivirus software to see if they're blocking the connection. Alternatively, reboot your router and modem, and ensure that your DNS settings are configured correctly. If none of these solutions work, contact your ISP or the website administrator to troubleshoot potential network issues on their end.
Hello, Just checking through if the issue was resolved or you have any further questions?
Hello @mjuestel2, the annotations dashboard is based on the MITRE technique values we provide in the correlation searches. Also, the panels are not driven by savedsearches; they work on the Risk data model. Please let me know if you have any questions about the same. Also, please accept the solution and hit Karma, if this helps!
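The annotation panels can be approximated with a tstats search over the Risk data model; a hedged sketch (the field path follows the Enterprise Security Risk data model, and the time range is an assumption):

```
| tstats summariesonly=true count
  from datamodel=Risk.All_Risk
  where All_Risk.annotations.mitre_attack.mitre_technique_id=*
  by All_Risk.annotations.mitre_attack.mitre_technique_id
```

This counts risk events per MITRE technique, which is essentially what the annotation-based panels aggregate.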
And are you sure the data isn't being indexed with the wrong timestamp? Did you check the index contents outside of the supposed time ranges?
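One way to check is to compare event time with index time over a wide window; a sketch, with the index name as a placeholder:

```
index=your_index earliest=-7d latest=+7d
| eval lag_seconds = _indextime - _time
| stats count min(lag_seconds) as min_lag max(lag_seconds) as max_lag by sourcetype
```

Large or negative lag values usually point at broken timestamp extraction rather than missing data.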
You need to extract special capture groups from each match, called _KEY_1 and _VAL_1.
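In transforms.conf, naming the capture groups _KEY_1 and _VAL_1 makes Splunk pair them as a field name and value for every match of the regex; a minimal sketch for key=value pairs (the stanza name and regex are illustrative):

```
# transforms.conf
[extract_kv_pairs]
REGEX = (?<_KEY_1>\w+)=(?<_VAL_1>[^,\s]+)

# Referenced from props.conf, e.g.:
# [my:sourcetype]
# REPORT-kv = extract_kv_pairs
```

Each match then creates a field named after the _KEY_1 capture, holding the _VAL_1 capture.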
Hi Team, I am trying to create a topic manually using Confluent Control Center (localhost:9021) and then, using Connect --> connect-default --> Connector --> Upload connector config file, I am uploading the Splunk sink properties, which already contain splunk.hec.token. But I am still getting the error "splun.hec.token" is invalid in the Confluent UI (2nd screenshot) in the browser. I'd appreciate it if anybody can help here. Please note we are trying this on Ubuntu OS, and Splunk, Confluent, and all the Kafka Connect components are in the same network on the same server.

Splunk Sink properties:
name=TestConnector
topics=mytopic
tasks.max=1
connector.class=com.splunk.kafka.connect.SplunkSinkConnector
splunk.hec.token=453a412d-029f-4fcf-a896-8c388241add0
splunk.indexes=Attest
splunk.hec.uri=https://localhost:8889
splunk.hec.raw=true
splunk.hec.ack.enabled=true
splunk.hec.ssl.validate.cert=false
splunk.hec.ack.poll.interval=20
splunk.hec.ack.poll.threads=2
splunk.hec.event.timeout=300
splunk.hec.ssl.validate.certs=false
Hello, I monitor metrics and limits for multiple AppDynamics controllers in a common dashboard:
- Use the AppDynamics REST API to gather data.
- Create a dashboard with tools like Grafana or Tableau.
- Store data centrally for historical analysis.
- Automate data collection and scheduling.
- Set up alerts for thresholds and limits.
Go through: https://docs.appdynamics.com/appd/22.x/22.2/en/extend-appdynamics/appdynamics-apis
Thank you.
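The REST-API step above can be sketched in Python; the controller hosts, application name, and metric path below are placeholders, and the /controller/rest/.../metric-data endpoint follows the AppDynamics REST API:

```python
import urllib.parse

def metric_data_url(host, app, metric_path, duration_mins=60):
    """Build a metric-data request URL for one controller (JSON output)."""
    query = urllib.parse.urlencode({
        "metric-path": metric_path,
        "time-range-type": "BEFORE_NOW",
        "duration-in-mins": duration_mins,
        "output": "JSON",
    })
    app_q = urllib.parse.quote(app, safe="")
    return f"https://{host}/controller/rest/applications/{app_q}/metric-data?{query}"

# One URL per controller, ready to be fetched (with credentials) on a schedule
controllers = ["controller1.example.com", "controller2.example.com"]
for host in controllers:
    print(metric_data_url(
        host, "MyApp",
        "Overall Application Performance|Average Response Time (ms)"))
```

A scheduled job would fetch each URL with the controller's API credentials and write the JSON into the central store the dashboard reads from.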
To automate the monthly restart and failover processes for your AppDynamics servers, you can use Ansible playbooks or workflows in vRealize Orchestrator (vRO) or ServiceNow. These automation scripts should include the steps to patch, restart, and, if applicable, perform failover for your servers. Set up scheduling, logging, and notifications for monitoring and alerting, and thoroughly test the automation in a non-production environment before deploying it in your production environment.
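The patch-and-restart step can be sketched as an Ansible playbook; the host group, service name, and one-node-at-a-time strategy below are assumptions to adapt to your environment:

```yaml
# Monthly patch-and-restart sketch for AppDynamics hosts (names are placeholders)
- hosts: appd_controllers
  become: true
  serial: 1                      # one node at a time, so the HA pair keeps serving
  tasks:
    - name: Apply pending OS patches
      ansible.builtin.package:
        name: "*"
        state: latest

    - name: Restart the AppDynamics controller service (assumed service name)
      ansible.builtin.service:
        name: appdcontroller
        state: restarted

    - name: Wait for the controller UI port to come back
      ansible.builtin.wait_for:
        port: 8090
        timeout: 300
```

Scheduling (cron, AWX, vRO, or ServiceNow), logging, and failure notifications would wrap around this playbook, and it should be rehearsed in a non-production environment first.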
Thanks for sharing. Very helpful.