All Posts



I would not recommend posting valid authorization tokens on the internet, as unscrupulous people or bots could abuse them. Could you try curl-ing the collector health endpoint using HTTPS instead of HTTP? If it still does not give a response, it might be a firewall issue. Try connecting to the machine itself using ssh and then doing a curl on localhost, like this:

curl -k https://127.0.0.1:8088/services/collector/health
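For reference, a healthy HEC typically answers that request with a small JSON body along these lines (the exact text can vary by Splunk version):

{"text":"HEC is healthy","code":17}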
Does the app work if you delete that addon_builder.conf file from the tgz file? It seems to contain a lookup, event types, props, tags, and transforms. These should still work even if Splunk complains about an older add-on builder version.
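If it helps, here is a quick sketch of doing that on a Linux box - the package and folder names are placeholders for your actual add-on:

# unpack the add-on, drop the offending conf file, and repack
tar -xzf your_addon.tgz
rm your_addon/default/addon_builder.conf
tar -czf your_addon_fixed.tgz your_addon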
stoomart, your script appears to be exactly what I need. When I run it on a RHEL 9 box, it immediately transitions to the fsck command with a single bucket option, then displays the error message "path is not extant". Your thoughts? Also, please clarify where you show "{index-name}": are you using the brackets to indicate placeholders, or are they meant to appear literally in the thawed bucket path and at the end? Thank you in advance for your help!
In this case I suspect starting at the end and working backwards might be helpful.

WMI - While it's not terrible for some small testing, I'd suggest not using it because it's *far* more difficult to set up, manage, and deal with than using a Universal Forwarder on the actual endpoint. The UF installs easily, is tiny and efficient, and *also uninstalls easily and completely too*. And don't take my word for it, Splunk also has docs for this. I know, it'll sound like they're "pushing the UF for some nefarious reason", but there's nothing nefarious about it, it's just better in nearly every way than using WMI. https://docs.splunk.com/Documentation/Splunk/9.2.1/Data/ConsiderationsfordecidinghowtomonitorWindowsdata

Even neater is to spend the few minutes - it's not terribly hard! - to set up the forwarders to use your Splunk instance as a deployment server (a minimal config sketch follows this post). Then on your Splunk server you *can* create remote inputs, but instead of being some unreliable "pull" over WMI, they'll be configs sent to the UF telling it how to collect those logs locally and send them in. With those changes, all your complaints about WMI will disappear. I mean, you may have new complaints, but they won't be about WMI.

"Could not find userBaseDN on the LDAP server" is generally just 'incorrect configuration'. Some time in ADSI Edit and the various AD tools may help here.

And network devices - it truly depends on your familiarity with syslog etc., but even having been a Windows admin I found getting network device data into Splunk was at least as easy as getting Windows data in. You literally started with what I think is the hard part. There are one or two extra moving parts, but they're all simple, isolated parts in the device->syslog->UF->Splunk path that are easily understood and worked with, vs. the "magic" and weird stuff that the Windows event logs can sometimes conjure up.

And a note - we're all 100% volunteers here. I'm sure the comment about "no time wasters" was just frustration speaking, and that's understandable. But it did come off as somewhat unkind, and I'm sure you would have gotten something of an answer much quicker without it. No one here that I've ever seen wants to waste your time. We're all spending our free time trying to help people.
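To illustrate the deployment-server approach mentioned above, here is a minimal sketch of the deploymentclient.conf you would place on each Universal Forwarder (e.g. under $SPLUNK_HOME/etc/system/local). The hostname is a placeholder, and this assumes the deployment server listens on the default management port 8089:

[deployment-client]

[target-broker:deploymentServer]
targetUri = my-deployment-server.example.com:8089

After a forwarder restart, the UF should appear on the deployment server's Forwarder Management page, and from there you push input configs to it instead of pulling over WMI.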
Did you get this figured out? We are currently fighting the same issue.
Maybe just try to upgrade it to the oldest 8.x version you can get on the downloads page, then uninstall it after that?  
If I understand you correctly, you are configuring a Linux host with a Splunk Enterprise installation (not a Universal Forwarder installation?) to retrieve deployment configurations from a second server, and you are saying that the first machine properly appears in the "Deployment Clients" interface of the second server when it's on version 9.1.3 of Splunk, but not on version 9.2.0.1?
It's terrible, they're not easily accessible except through the UI. It's a big ... sore spot for some of us who need to use these in a more programmatic way.

But, there is a way using the REST interface from cURL.

curl -k -u <username>:<password> https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input

Obviously fix the username and password to an admin one, and your hostname if it's not on localhost. You might want to pipe that through jq to 'pretty print' it if you have jq installed, because otherwise it's all smashed together and hard to read:

curl -k -u <username>:<password> https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input | jq .

You can also see only an individual one if you append the _key's value for the one you want to the end. (The _key comes from the output of one of the earlier commands.)

curl -k -u <username>:<password> https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input/6452ce6e55102d0ad735ec31 | jq .

You can also delete them or edit them, though ... obviously be careful and do this in a test environment at first!

curl -k -u <username>:<password> https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input/6452ce6e55102d0ad735ec31 -X DELETE

And I've not found a good way to "edit" them, but it's pretty trivial to just edit the JSON you get from an individual entry, and load that back in wholesale.

curl -k -u <username>:<password> https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input -d '{ "inputName" : "newEntryforMyDB", "value" : "200", "appVersion" : "3.16.0", "columnType" : 4, "timestamp" : "2024-03-21T13:11:41.633-05:00", "_user" : "nobody", "_key" : "65fc6ce1764e95450b0d98e1" }' -H "Content-Type: application/json"

Which would overwrite entry 65fc6... with that new information.

Happy Splunking, Rich
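As a small convenience on top of the commands above, and assuming the collection entries carry an inputName field as in the example JSON (the field name is illustrative, not guaranteed), a jq filter can list just the key and name of each input:

curl -sk -u <username>:<password> https://localhost:8089/servicesNS/nobody/splunk_app_db_connect/storage/collections/data/dbx_db_input | jq -r '.[] | [._key, .inputName] | @tsv'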
Your *exact* example doesn't make much sense - why would y-d be y1 instead of y2? But at least some of this may be as simple as "makemv" and/or "mvexpand".

In your example, it appears as if abcde are all multi-value fields (the "mv" in the two above commands). If that's so,

... | mvexpand parameter

should make the original into 13 rows. Once they're separated, perhaps there's some other eval/conditionals you can use to get each output row to include the correct value? If that doesn't work, you may need something like ...

... | makemv delim=" " parameter | mvexpand parameter

In any case I think you'll be two steps closer and we can iterate from there.

Happy Splunking, Rich
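For anyone wanting to see the two commands in action without real data, here is a minimal self-contained sketch you can paste into a search bar - the field name and values are made up for illustration:

| makeresults
| eval parameter="val1 val2 val3"
| makemv delim=" " parameter
| mvexpand parameter
| table parameter

This produces one row per value, which is the splitting effect described above.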
@yuanliu apologies, my bad - moving the inputlookup subsearch to the end returns all results (NOT just the matching results):

index="demo1" source="demo2"
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name="test_field_name_1"
| table _raw id_num
| reverse
| filldown id_num
[inputlookup sample.csv | fields FailureMsg | rename FailureMsg AS search | format]

Could you please help?
@yuanliu Thank you for your response again. Apologies for my wording if it created any confusion; I will be more careful going forward. You're right, I meant my search did not return any results in my context.

This query returned the events matching my search, but I noticed that the id_num field in the results was blank even though I was using filldown to populate it:

index="demo1" source="demo2" [inputlookup sample.csv | fields FailureMsg | rename FailureMsg AS search | format]
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name="test_field_name_1"
| table _raw id_num
| reverse
| filldown id_num

I moved the lookup subsearch to the end, after filldown, and now I do see the id_num field populated in the results table:

index="demo1" source="demo2"
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name="test_field_name_1"
| table _raw id_num
| reverse
| filldown id_num
[inputlookup sample.csv | fields FailureMsg | rename FailureMsg AS search | format]
I'm also having a similar problem. The "user menu" in my Splunk UI is simply not there, so I'm not able to change my preferences or even log out. Any help would be greatly appreciated.
First, please do not use phrases like "does not work" because it conveys little information in the best scenario. There are many ways a search "does not work". There could be an error message. There could be no error, and no output. There could be output, but not what you expected. And so on.

I assume that what you meant was that the search gave no output. The problem, then, is that your raw events do NOT have a field named FailureMsg, as your OP implied. (I tried to clarify this in my previous response.) The fact that

index="demo1" source="demo2" ("fail_msg1" OR "fail_msg2")

returns results only means that the terms "fail_msg1" and "fail_msg2" exist in some events; you need to be explicit about which fields are available at search time. If you do not have a suitable field name in raw events to limit the search, a subsearch can still be used to match straight terms by using a pseudo keyword search:

index="demo1" source="demo2" [inputlookup sample.csv | fields FailureMsg | rename FailureMsg AS search | format]
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name=test_field_name_1
| table _raw id_num
| reverse
| filldown id_num
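To see why the rename-to-search plus format trick works: assuming sample.csv contains the two values fail_msg1 and fail_msg2, the subsearch expands into a plain keyword filter, roughly

index="demo1" source="demo2" ( ( "fail_msg1" ) OR ( "fail_msg2" ) )

so no field name is needed at all - Splunk matches the terms directly against the raw events.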
As per the below screenshot, my server is not giving any health status on HEC port 8088. Because of this, I am not able to publish anything using an HEC token in Splunk. For example:

curl -k "Authorization: Splunk ee6d8a90-4863-4789-9ff1-fda810bee6f2" http://walvau-vidi-1:8000/services/collector/event -d '{"event": "hello world"}'

Please guide me on what the issue might be and how I can investigate it further.

default inputs.conf:

[http]
disabled=1
port=8088
enableSSL=1
dedicatedIoThreads=2
maxThreads = 0
maxSockets = 0
useDeploymentServer=0
# ssl settings are similar to mgmt server
sslVersions=*,-ssl2
allowSslCompression=true
allowSslRenegotiation=true
ackIdleCleanup=true

local inputs.conf:

[http]
disabled = 0
enableSSL = 0
Indeed it seems you are stuck with the version from 2 years ago, since this app has not been updated since then. Best thing to do in this case is to suppress the warning message and wait for an update.
Hello Bitdefender team,

Could you kindly assist with updating the Bitdefender GravityZone Add-on for Splunk? Currently, we are experiencing difficulties uploading the add-on per the integration instructions provided in https://www.bitdefender.com/business/support/en/77211-171475-splunk.html and we're receiving the following error message:

"The Add-on Builder version used to create this app (4.1.0) is below the minimum required version of 4.1.3. Please re-generate your add-on using Add-on Builder 4.1.3 or later. File: default/addon_builder.conf Line Number: 4"

Your prompt attention to this matter would be greatly appreciated.
Thanks! @ITWhisperer This is really helpful. The only problem is that when I tried the shared query, it is not able to fetch the final status as succeed or failed. As per the sample event, the platform index has a message field containing text like marked request as succeed or marked request as failed. Attaching a snap for reference.
@yuanliu Thank you for your reply. The following block works for me when run independently:

index="demo1" source="demo2"
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name="test_field_name_1"
| table _raw id_num
| reverse
| filldown id_num

and this query works:

| inputlookup sample.csv | fields FailureMsg

but this block returns no results for me:

index="demo1" source="demo2" [inputlookup sample.csv | fields FailureMsg]

I tried this block as well, and it returned no results either:

index="demo1" source="demo2" [ | inputlookup sample.csv | fields FailureMsg ]

Since the above query returned nothing, the entire block you suggested returned nothing as well:

index="demo1" source="demo2" [inputlookup sample.csv | fields FailureMsg]
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name=test_field_name_1
| table _raw id_num
| reverse
| filldown id_num

This query works for me when I search for fail_msg1 or fail_msg2:

index="demo1" source="demo2" ("fail_msg1" OR "fail_msg2")

Any idea how to search this using inputlookup or lookup?
I assume you have already tried these or similar openssl commands? openssl x509 -in certname.crt -out certname.pem -outform PEM openssl x509 -inform DER -in certname.crt -out certname.pem -text Co... See more...
I assume you have already tried these or similar openssl commands?

openssl x509 -in certname.crt -out certname.pem -outform PEM
openssl x509 -inform DER -in certname.crt -out certname.pem -text

Could you also try renaming the .crt directly to .pem? You might be lucky and it will already be in the PEM format.
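One way to check up front which encoding you actually have (the filename is a placeholder): a PEM file is plain text that begins with a BEGIN CERTIFICATE header, while DER is binary.

head -1 certname.crt
openssl x509 -inform DER -in certname.crt -noout

If the first command prints "-----BEGIN CERTIFICATE-----", the file is already PEM and renaming is enough; if the second command exits without error, the file is DER-encoded and the DER conversion command above applies.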
Hi @mfonisso, I’m a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend tha... See more...
Hi @mfonisso,

I'm a Community Moderator in the Splunk Community. This question was posted 1 year ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.

Thank you!