All Posts

Seems like your problem is that an ID in your XML is duplicated. It should be:

<p id="personal_valueA_token_id">$personal_valueA_token$</p>
<p id="personal_valueB_token_id">$personal_valueB_token$</p>

Your original duplicates personal_valueA_token_id:

<p id="personal_valueA_token_id">$personal_valueA_token$</p>
<p id="personal_valueA_token_id">$personal_valueB_token$</p>
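Duplicate id attributes like this are easy to catch programmatically. A minimal sketch using Python's standard library XML parser (the snippet and element names are just the example from above, wrapped in a root element so it parses):

```python
import xml.etree.ElementTree as ET
from collections import Counter

def duplicate_ids(xml_text):
    """Return id attribute values that appear on more than one element."""
    root = ET.fromstring(xml_text)
    counts = Counter(
        el.attrib["id"] for el in root.iter() if "id" in el.attrib
    )
    return [value for value, n in counts.items() if n > 1]

snippet = """
<root>
  <p id="personal_valueA_token_id">$personal_valueA_token$</p>
  <p id="personal_valueA_token_id">$personal_valueB_token$</p>
</root>
"""

print(duplicate_ids(snippet))  # ['personal_valueA_token_id']
```

Running this over the body of a dashboard's XML before saving it would flag the duplicated ID immediately.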
In Developer tools, the example you posted gives this:
The regex is simple enough. ^.*\sSetting\sconnector\s(?<connector_event>[^\s]+).*$
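As a quick sanity check, the same regex can be exercised in Python against the log line from the question. Note that Python spells the named group `(?P<name>...)` rather than `(?<name>...)`; the pattern is otherwise unchanged:

```python
import re

# Same capture as the SPL rex: grab the token after "Setting connector".
pattern = re.compile(r"^.*\sSetting\sconnector\s(?P<connector_event>[^\s]+).*$")

line = ("[2025-01-22 13:33:33,899] INFO Setting connector "
        "ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1 state to PAUSED "
        "(org.apache.kafka.connect.runtime.Worker:1391)")

m = pattern.match(line)
print(m.group("connector_event"))  # ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1
```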
Go to Settings -> User Interface -> Navigation Menus, then edit the menu for the relevant app.
@mohsplunking - The errors definitely seem to be related to the SSL certificate file or the SSL certificate configuration in Splunk.
* It's too broad a topic to say exactly what's wrong.
* Check which SSL certs are configured in Splunk, then verify each cert file's expiration date and validity.
* Make sure the Splunk config doesn't have any other issues.
I hope this helps!!!
Hi Community, please help me extract the bold/underlined value from the string below: [2025-01-22 13:33:33,899] INFO Setting connector ABC_SOMECONNECTOR_CHANGE_EVENT_SRC_Q_V1 state to PAUSED (org.apache.kafka.connect.runtime.Worker:1391)
I had to use a combination of plain text and a JavaScript variable for this to work.

var splQuery = "makeresults";
var SearchManager = require("splunkjs/mvc/searchmanager");
var mysearch = new SearchManager({
    id: "mysearch",
    autostart: "false",
    search: "| " + splQuery
});
@gumusservi- Can you please provide the query you are running?
@newsplunkuser - It is definitely possible; we have many production systems on VMs. Verify the following first:
* Each VM should have its own unique IP address (private or public).
* Each VM should be able to reach the other VM's IP address; if it can't, that is a networking issue which needs to be fixed first.
* Install Splunk Enterprise on both VMs.

Once you've verified the above, you can configure Splunk to receive and forward data.

On the Splunk indexer machine, set up data receiving from the UI (Settings) or through inputs.conf:

[splunktcp:9997]

On the Splunk forwarder machine, set up data forwarding from the UI (Settings) or through outputs.conf:

[tcpout]
defaultGroup = my_indexer

[tcpout:my_indexer]
server = <ip-of-indexer-vm>:9997

I hope this helps!!! Kindly upvote if it does!!!
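Before editing outputs.conf, it can save time to confirm the forwarder VM can actually reach the indexer on TCP 9997. A small sketch using plain Python sockets (the host below is a placeholder; substitute your indexer VM's IP):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace with the indexer VM's actual IP before running:
# print(port_reachable("<ip-of-indexer-vm>", 9997))
```

If this returns False from the forwarder VM, fix the network/firewall path before touching any Splunk config.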
Afternoon, I've been beating my head against the keyboard for the last few days trying to get this to work. I want to exclude these two event codes from being indexed. This is what my inputs.conf file looks like:

[default]
host = "hostname"

[splunktcp://9997]
connection_host = ip

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist = 5447,6417

I save the file and restart Splunk from Settings -> Server Controls -> Restart Splunk, then wait about 30 minutes to see if the event codes are being dropped from my index. No joy. I've tried adding sourcetype=WinEventLog:Security, changing the blacklist number, and using this:

[WinEventLog://Security]
disabled = 0
current_only = 1
blacklist1 = EventCode ="5447" Message="A Windows Filtering Platform filter has changed*"

Still no joy.
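One thing worth remembering while debugging this: the values in a WinEventLog blacklist are treated as regular expressions matched against the event's fields. A rough Python illustration of how a `blacklist = 5447,6417` entry filters event codes (this only mimics the matching idea, not Splunk itself; the event codes are the ones from the question):

```python
import re

# Stand-ins for the comma-separated blacklist values in inputs.conf.
blacklist_patterns = ["5447", "6417"]

def is_blacklisted(event_code):
    """True if the event code matches any blacklist pattern (regex match)."""
    return any(re.search(p, event_code) for p in blacklist_patterns)

for code in ["5447", "6417", "4624"]:
    print(code, "dropped" if is_blacklisted(code) else "indexed")
```

Because the patterns are regexes, an unanchored value can match more than you intend, which is one reason to test the exact blacklist string carefully.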
You're asking the lookup command to find the description field, but that field doesn't exist yet. The lookup command needs an input field, which it then searches for in the given lookup table. It returns the requested fields from the same row as the matched input value. In your example, we probably want the description associated with the destination port. If so, the command might look like this:

| lookup tcp-udp port AS DEST_PORT OUTPUT description AS desc, port

Also, the DPT field in the eval command doesn't exist, so the if will always evaluate to false.
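The way lookup consumes an input field and returns fields from the matching row can be mimicked in a few lines of Python (the table contents below are made up; only the mechanics matter):

```python
# A tiny stand-in for the tcp-udp lookup table: one dict per CSV row.
lookup_table = [
    {"port": "22", "description": "SSH"},
    {"port": "443", "description": "HTTPS"},
]

def lookup(rows, input_field, input_value, output_fields):
    """Find the row whose input_field equals input_value and return the
    requested output fields, like | lookup <table> <field> OUTPUT ..."""
    for row in rows:
        if row.get(input_field) == input_value:
            return {f: row[f] for f in output_fields}
    return {}  # no match: the output fields stay empty, as in SPL

# The event supplies DEST_PORT; it's matched against the table's port column.
print(lookup(lookup_table, "port", "443", ["description"]))  # {'description': 'HTTPS'}
```

This is why naming the output field as the input makes no sense: the input must be a field that already exists in your events.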
There is a minor issue with the lookup command syntax:

| lookup tcp-udp port OUTPUT description

After the lookup name there should be the field that is common between the lookup and your Splunk data, which in this case is the port number.

Lookup command doc - https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Lookup

I hope this helps!!! Kindly upvote if it does!!
@isoutamo @gcusello - We have a deployer, but it was last used a few years back; for the last couple of years we have been deploying each app to the search heads manually, without the deployer. I recently tried to deploy an app using the deployer and found there was a password mismatch.
1. If I now set the password on the deployer to match all the search heads, is it okay to push an app, or will I see any issues?
2. If I push apps from the deployer, will there be any issues with apps that exist on the search heads but not on the deployer?
3. After deployment, will there only be changes to apps that exist on both the deployer and the search heads?
Which standard add-on are you talking about, and where can the related add-on be obtained? Can you give a name or example?
Combing through firewall logs, I am extracting source, destination, and dest_port. I have a CSV lookup file with ports and descriptions of those ports, both UDP and TCP. I want to take the description from the lookup and add it to the results in a table. Here is my search:

| stats count by SRC, DST, DEST_PORT
| lookup tcp-udp description OUTPUT description AS desc, port
| eval desc=if(DPT = port, description, "not ok")
| table SRC, DST, DEST_PORT, port, desc

The port and desc fields are blank and say "not ok" respectively. I'm stuck...
9.5/10.0 (depending on the actual future version) has the fix, meaning the functionality is restored. It is not backported to 9.3.x/9.4.x.
Ah ok - that's helpful info. The SPL-263518 entry in both the 9.3 and 9.4 release notes doesn't really state that it was a regression, and there's no link explaining it; it would be easier as a consumer if that SPL linked to a longer writeup/explanation. Do you happen to know if there is a plan/timeline for re-adding it? Will it go into, say, 9.3.3 and 9.4.1, or will 9.3 and 9.4 just keep this regression and 9.5 re-add it?
> Does this re-enable the log(s)?
Yes.
> We need to re-enable group=per_source_thruput so we can rely on that check.
Apply the workaround.
> Was this removed for a security reason or just simply to reduce local log writes, etc.?
It was accidentally removed (a regression).
Did you try this (if it's one of the input types mentioned)?

run_only_one = <boolean>
* Determines whether a scripted or modular input runs on only one search head in the SHC.
One thing which I "found" (fortunately in a test environment, with backups): if your deployer is down long enough, your SHC members lose all apps that were deployed by the deployer!

Have you lost your deployer, or only the connection between it and the members? If it's only the connection, it should be enough to restore connectivity between them. Of course, you must ensure that the deployer still has all the apps that were previously deployed to the members.

If you have lost the whole node, have no backup, and must build it from scratch, there are some things to check and update before you can put it back online:
- Ensure that the apps you previously deployed are in place, with the same configuration.
- Ensure that the lookups are correct.
- Check that the deployment modes are correctly set up globally and per app (how local and default are transferred to members).
- Check how lookups should be pushed to members: override, or keep the member version?
- Check whether the members share the same splunk.secret, and if so, copy it to the deployer before starting it for the first time.
- Then there is what you and @gcusello already mentioned.
- Check that all nodes have the same time!
- Maybe something else?

If you can do and test this in a test environment, do that first and check what issues arise after the deployer is back online. If you can't, you must take a backup of all those nodes while they are offline, and include a kvstore backup too.

I would like to hear how this goes after you have done it!