All Posts



Misprint here: in normal cases Splunk replies with something like "[App Key Value Store migration] Starting migrate-kvstore."
Hello, Splunkers! I've just changed storageEngine to wiredTiger on my single instance.

[root@splunk-1 opt]# /opt/splunk/bin/splunk version
Splunk 8.1.10.1 (build 8bfab9b850ca)
[root@splunk-1 opt]# /opt/splunk/bin/splunk show kvstore-status --verbose
This member:
  backupRestoreStatus : Ready
  date : Wed Apr 23 09:56:56 2025
  dateSec : 1745391416.331
  disabled : 0
  guid : 3FA11F27-42E0-400A-BF69-D15F6B534708
  oplogEndTimestamp : Wed Apr 23 09:56:55 2025
  oplogEndTimestampSec : 1745391415
  oplogStartTimestamp : Wed Apr 23 09:50:13 2025
  oplogStartTimestampSec : 1745391013
  port : 8191
  replicaSet : 3FA11F27-42E0-400A-BF69-D15F6B534708
  replicationStatus : KV store captain
  standalone : 1
  status : ready
  storageEngine : wiredTiger
KV store members:
  127.0.0.1:8191
  configVersion : 1
  electionDate : Wed Apr 23 09:55:23 2025
  electionDateSec : 1745391323
  hostAndPort : 127.0.0.1:8191
  optimeDate : Wed Apr 23 09:56:55 2025
  optimeDateSec : 1745391415
  replicationStatus : KV store captain
  uptime : 95

Now I'm trying to upgrade the mongod version from 3.6 to v4.2. According to mongod.log my current version is:

2025-04-23T06:55:21.374Z I CONTROL [initandlisten] db version v3.6.17-linux-splunk-v4

Following the docs, I'm trying to migrate to another version of mongo manually, but I get the following message:

[root@splunk-1 opt]# /opt/splunk/bin/splunk migrate migrate-kvstore
[App Key Value Store migration] Collection data is not available.

What does Splunk mean by "Collection data is not available"? I have several collections in my Splunk, and I haven't found any similar case in the Community. In normal cases Splunk replies with something like "[App Key Value Store migration] Starting migrate-kvstore." It seems that I'm doing something wrong in general. Thanks
Thanks all for the help, I will try a regex that match both. I learn a lot with you guys thanks !!!!
It's a KV store collection and can be found at $SPLUNK_HOME/etc/apps/TA-Akamai_SIEM/default/collections.conf
@DaltonCarmon  When you change the Splunk password, either via the GUI or via the CLI, the $SPLUNK_HOME\etc\passwd file is updated and thereafter user-seed.conf is ignored. However, if $SPLUNK_HOME\etc\passwd is ever deleted, user-seed.conf will again specify the default admin login password. Place user-seed.conf in C:\Program Files\Splunk\etc\system\local (not default). Files in local override default and are meant for custom configurations.   https://docs.splunk.com/Documentation/Splunk/latest/Admin/User-seedconf    To configure the default username and password, place the user-seed.conf file in $SPLUNK_HOME\etc\system\local. You must restart Splunk for these settings to take effect.   Note: If the $SPLUNK_HOME\etc\passwd file exists, the configurations in user-seed.conf will be ignored.
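For reference, a minimal user-seed.conf follows the documented format below; replace the password value with your own (a HASHED_PASSWORD key is also supported if you don't want the password in cleartext):

```ini
[user_info]
USERNAME = admin
PASSWORD = changeme-to-something-strong
```

Remember this file is only consumed when $SPLUNK_HOME\etc\passwd does not exist, and a restart is required for it to take effect.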
@amitrinx  Pls check this, I used the makeresults command for dummy data.

| makeresults
| eval raw_json="[ {\"user\":\"user1@example.com\",\"status\":\"sent\",\"ip_address\":\"192.168.1.10\",\"reply\":\"Message accepted\",\"event_id\":\"EVT001\",\"message_id\":\"MSG001\",\"template_id\":\"TPL001\",\"template_name\":\"welcome\",\"smtp_code\":\"250\",\"time\":\"2025-04-23T10:00:00Z\",\"encryption\":true,\"service\":\"email_service\"}, {\"user\":\"user2@example.com\",\"status\":\"queued\",\"ip_address\":\"192.168.1.20\",\"reply\":\"Queued for delivery\",\"event_id\":\"EVT002\",\"message_id\":\"MSG002\",\"template_id\":\"TPL002\",\"template_name\":\"reset_password\",\"smtp_code\":\"451\",\"time\":\"2025-04-23T10:05:00Z\",\"encryption\":false,\"service\":\"notification_service\"}, {\"user\":\"user3@example.com\",\"status\":\"failed\",\"ip_address\":\"192.168.1.30\",\"reply\":\"Mailbox not found\",\"event_id\":\"EVT003\",\"message_id\":\"MSG003\",\"template_id\":\"TPL003\",\"template_name\":\"alert\",\"smtp_code\":\"550\",\"time\":\"2025-04-23T10:10:00Z\",\"encryption\":true,\"service\":\"security_service\"}, {\"user\":\"user4@example.com\",\"status\":\"opened\",\"ip_address\":\"192.168.1.40\",\"reply\":\"Email opened\",\"event_id\":\"EVT004\",\"message_id\":\"MSG004\",\"template_id\":\"TPL004\",\"template_name\":\"newsletter\",\"smtp_code\":\"200\",\"time\":\"2025-04-23T10:15:00Z\",\"encryption\":true,\"service\":\"marketing_service\"} ]"
| spath input=raw_json path={} output=event
| mvexpand event
| spath input=event
| table user status reply service
I am currently working with data from the SendGrid Event API that is being ingested into Splunk. The data includes multiple email events (e.g., delivered, processed) wrapped into a single event, and this wrapping seems to happen randomly.

Here is a sample of the data structure:

[ { "email": "example@example.com", "event": "delivered", "ip": "XXX.XXX.XXX.XX", "response": "250 mail saved", "sg_event_id": "XXXX", "sg_message_id": "XXXX", "sg_template_id": "XXXX", "sg_template_name": "en", "smtp-id": "XXXX", "timestamp": "XXXX", "tls": 1, "twilio:verify": "XXXX" }, { "email": "example@example.com", "event": "processed", "send_at": 0, "sg_event_id": "XXXX", "sg_message_id": "XXXX", "sg_template_id": "XXXX", "sg_template_name": "en", "smtp-id": "XXXX", "timestamp": "XXXX", "twilio:verify": "XXXX" } ]

I am looking for a query that can extract the email, event, and response (reason) fields from this data, even when multiple events are wrapped into a single event entry.

Could anyone please provide guidance on the appropriate Splunk query to achieve this?
Hello, We have a few hundred hosts and a handful of customers. I have a csv file with serverName,customerID. I've been able to add the customerID to incoming events using props.conf/transforms.conf on the HF, but I've had no luck with metric data. Background: I'd like to use the customerID later for search restriction in roles. Any suggestions where to start troubleshooting? Kind Regards Andre
@PickleRick thanks for your response. Yes, it's configured properly, but tcpdump showed nothing coming in on port 514. It seems the problem might be on the UCS side. As someone on the Cisco community suggested, I tried running "ethanalyzer local interface mgmt capture-filter "port 514" limit-captured-frames 0 detail" on the UCS side, but it looks like the UCS itself is not generating any traffic out on port 514, and hence there is no data on the rsyslog side.
I was sending an alert using the Teams app on Splunkbase, which posts a card message to Teams. I want to send a plaintext message using a webhook, because the customer wants to receive a plaintext message rather than a card message. Can I use the $result.field$ token for the message content in the payload? I need to use the fields in the search results table.

Goals:
1. Post a plaintext message to MS Teams as a notification feature
2. Use the fields in the table of the notification search results as tokens
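For reference, a Teams incoming webhook accepts a simple plaintext payload like the sketch below. Note that Splunk's built-in webhook alert action sends its own fixed JSON body, so substituting $result.field$ tokens into a custom payload like this generally requires a custom alert action or an app that supports message templates; the field names host and count below are hypothetical examples:

```
{
  "text": "Alert fired on $result.host$ - count=$result.count$"
}
```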
Hi @davidco  It'd be worth validating the Splunk receiving end and the logs available. Please could you check for HEC errors using:

index=_internal reply!=0 HttpInputDataHandler

For more info on reply codes see https://docs.splunk.com/Documentation/Splunk/9.4.1/Data/TroubleshootHTTPEventCollector
Any error reply codes here may provide more insights.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @ganesanvc  Looking at the square brackets there, it looks like you're running the sub-search part in the SPL search box; try removing the [ and ] so that we can see if it works independently.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @addOnGuy  I think the target would be <yourApp>/bin/ta_ignio_integration_add_on/ If you look in that folder - is the previous version of splunk-sdk in there?

pip install --upgrade splunk-sdk --target <yourAppLocation>/bin/ta_ignio_integration_add_on/

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @JoaoGuiNovaes  I think every 30 days is way too infrequent for this - you would want the service accounts added fairly soon after they're first seen so the info can be used in other searches. Personally I would run it more frequently, e.g. hourly, or every 4 hours. I usually set the look-back (earliest) to the time since the previous run, with an extra 10 mins to account for lag, so something like earliest=-70m latest=-10m (a 60-minute period, running every hour).

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @danielbb  It could be something like a field extraction happening after the line breaking which is causing this, or something else. Without access to your instance, we could do with seeing some sample logs along with btool output ($SPLUNK_HOME/bin/splunk btool props list <sourceTypeName>) for your event's sourcetype.

The thread you posted from 2013 looks like it could have been related to the events having a line-break in them. Please let us know if you're able to provide a sample + props output.

Thanks
Hi @Prajwal_Kasar  This means that urllib3 v2.x is not compatible with the version of OpenSSL (1.0.2) in your Splunk Cloud Python environment. Even though you may have bundled your own libraries, you can't change the underlying OpenSSL on Splunk Cloud.

urllib3 v2.0+ dropped support for OpenSSL < 1.1.1; however, many environments (including Splunk Cloud's Python and underlying OS) still use OpenSSL 1.0.2. To fix this you need to pin urllib3 to v1.x. I would try installing a specific urllib3 package, 1.26.18, into your lib/deps folder along with winrm, as 1.26.18 supports OpenSSL 1.0.2.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
This worked for me! The other 2 above did not for some reason. Thanks
Hi all, We want to test whether a cluster bundle on the cluster manager needs to restart the cluster peers, using the REST API.

In the first step we run a POST against:
https://CLM:8089/services/cluster/manager/control/default/validate_bundle?output_mode=json
with check-restart=true in the body, and check json.entry[0].content.checksum to get the checksum of the new bundle. If there is no checksum, there is no new bundle.

Second, we check the checksum against GET:
https://CLM:8089/services/cluster/manager/info?output_mode=json
json.entry[0].content.last_validated_bundle.checksum
json.entry[0].content.last_dry_run_bundle.checksum
to verify that the bundle check and the restart test are completed, and then consider json.entry[0].content.last_check_restart_bundle_result to decide whether the restart is necessary or not.

Unfortunately we see that the value of json.entry[0].content.last_check_restart_bundle_result changes even when last_validated_bundle.checksum and last_dry_run_bundle.checksum are set to the correct values.

To make a long story short: we see that the red value is changing while the green one is not, which is unexpected for us. Tested against v9.2.5 and v9.4.1. At the moment it looks like a timing issue to me, and I want to avoid sleep() code.

Is there a more solid way to check if a restart is necessary or not?

best regards,

Andreas
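One way to avoid a fixed sleep() is a polling guard: only read last_check_restart_bundle_result once both checksums have caught up to the bundle you just validated, and treat any other state as "not ready yet, poll again". A minimal Python sketch of that decision logic, assuming the JSON shapes quoted above (this encodes the gating, not a confirmed fix for the race):

```python
# Decide whether a pushed cluster bundle needs a peer restart, from the
# parsed JSON of validate_bundle (POST) and cluster/manager/info (GET).
# Field names are taken from the question above; the "return None and
# retry" convention is an assumption of this sketch.

def restart_needed(validate_json, info_json):
    """Return True/False once the dry run for the new bundle is complete,
    or None if the CM has not finished checking it yet (caller retries)."""
    new_checksum = validate_json["entry"][0]["content"].get("checksum")
    if not new_checksum:
        # No checksum in the validate response -> no new bundle to apply.
        return False
    content = info_json["entry"][0]["content"]
    # Only trust last_check_restart_bundle_result after BOTH checksums
    # have caught up to the bundle we just validated.
    if (content["last_validated_bundle"]["checksum"] != new_checksum or
            content["last_dry_run_bundle"]["checksum"] != new_checksum):
        return None  # dry run still in progress -> poll /info again
    return bool(content["last_check_restart_bundle_result"])
```

The caller would loop with a short interval, re-fetching /services/cluster/manager/info until the function stops returning None, rather than sleeping a fixed amount once.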
Hey, an email was released today from the Splunk Cloud Platform Team stating that to fix this issue we should patch up to 9.4.0, 9.3.2, 9.2.4 or 9.1.7, as you have mentioned above. Last month the "Splunk Security Advisories" said to patch up to 9.4.1, 9.3.3, 9.2.5, and 9.1.8 - so if we are on 9.4.1, 9.3.3, 9.2.5, or 9.1.8, are we covered by the fix? Second question: if Splunk issued the recommendation to patch up to a higher patch level, why would they come back and recommend patching to a lower version with security vulnerabilities instead of patching up?
There is absolutely no problem with running rsyslog on the same box as splunk provided that you're not trying to bind the same port(s) to both programs. Have you configured rsyslog to receive network data on proper ports? Did you verify it is listening? Did you check with tcpdump/wireshark whether UCS is sending data?
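If the listener side is in doubt, a minimal rsyslog network input would look something like the sketch below (RainerScript syntax, rsyslog v8+; port 514 per this thread, and the file path is just an example):

```
# /etc/rsyslog.d/10-ucs.conf -- accept syslog on UDP and TCP port 514
module(load="imudp")
input(type="imudp" port="514")
module(load="imtcp")
input(type="imtcp" port="514")
```

After restarting rsyslog, `ss -lnup | grep 514` (or `ss -lntp`) should show rsyslogd bound to the port; if it does and tcpdump still shows nothing arriving, the sender is the place to look.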