All Posts
IMHO: I don't like, and don't suggest, permanently giving anyone delete permissions! It isn't a great idea to run scheduled jobs that remove events from Splunk.
There are several tools for server monitoring. To be honest, core Splunk is not one of the best tools for that task. Core Splunk's strength is managing those logs, and also its integration with other Splunk tools/servers that are much better suited to that task. From the Splunk offering, you should look at this for pure server monitoring: https://www.splunk.com/en_us/products/infrastructure-monitoring.html
Other options than IA are Edge Processor, Ingest Processor, and/or frozen buckets.
https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/EdgeProcessor/AmazonS3Destination
https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/IngestProcessor/AmazonS3Destination
https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Admin/DataSelfStorage
With IA, EP and IP the output format is JSON, actually a HEC-suitable format. With IP you can also select Parquet if you want. If you are running your Enterprise deployment in AWS, you could configure your frozen buckets to be stored in S3 buckets. Depending on your cold2frozen script, you could store only raw data in S3, or more if you really want it.
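If you go the frozen-bucket route, here is a minimal indexes.conf sketch; coldToFrozenDir and coldToFrozenScript are the two relevant settings (set one or the other per index), but the index name and script path below are hypothetical placeholders, not values from this thread:
```
# indexes.conf (sketch; index name and paths are hypothetical)
[web]
# Option A: Splunk copies each frozen bucket's raw journal to this
# directory; a separate job (e.g. aws s3 sync) can then ship it to S3.
coldToFrozenDir = /opt/splunk/frozen/web

# Option B (instead of A): hand each bucket to your own archiving script,
# which can upload only the rawdata journal, or the whole bucket, to S3.
# coldToFrozenScript = "/opt/splunk/bin/python3" "/opt/splunk/etc/apps/myapp/bin/cold2frozen.py"
```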
Hi @asadnafees138
I assume you are using Ingest Actions to send your data to S3? If that is the case, then you have the option of either raw, newline-delimited JSON, or JSON format. This makes re-ingestion, or use by other tools, easier in the future, as the data is not stored in a proprietary format. If you are ever looking to use Federated Search for S3 to search your S3 data, note that it requires newline-delimited JSON (ndjson). For more information, I'd recommend checking out the Ingest Actions Architecture docs.
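For illustration only: newline-delimited JSON means one complete JSON object per line. The field names below follow the HEC-style event shape and are an assumption for the example, not the exact schema Ingest Actions writes:
```
{"time": 1715000000, "host": "web01", "source": "/var/log/app.log", "sourcetype": "access_combined", "event": "10.0.0.1 - - [06/May/2024:12:13:20] \"GET / HTTP/1.1\" 200 512"}
{"time": 1715000001, "host": "web01", "source": "/var/log/app.log", "sourcetype": "access_combined", "event": "10.0.0.2 - - [06/May/2024:12:13:21] \"GET /login HTTP/1.1\" 302 0"}
```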
Hello Team,
I need to back up my Splunk log data to an AWS S3 bucket, but I need to confirm in which format the logs will be stored, so that in case I need those logs in the future I can convert them into a readable form. Please note, I really need to know the exact format of the log data stored in Splunk. Please confirm.
Regards,
Asad Nafees
Create a Python script to collect the logs and send them to Splunk? Why even use Splunk then? If you have a log collection method and a Bootstrap dashboard with a small local database, you have a full-fledged monitoring app. There are a thousand monitoring apps out there. What's one more?
Thank you. How can I use eval like that here? I mean, here the status field contains some other text before and after the 'Host is down' and 'database is down' values, and it varies with every event. I just wanted to put something like status=like(*host is down*).
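A minimal sketch of handling arbitrary surrounding text: like() uses % as the wildcard (not *), and lowercasing first makes the match case-insensitive. The sample value and labels here are made up for illustration:
```
| makeresults
| eval Status="ALERT 10:02 - Host is down, escalating to oncall"
| eval s=lower(Status)
| eval description=case(
    like(s, "%host is down%"), "host outage",
    like(s, "%database is down%"), "database outage",
    true(), "other"
  )
```
An equivalent regex form is match(Status, "(?i)host is down"), which avoids the separate lower() step.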
Here is some guidance on how to resolve the problem: https://community.splunk.com/t5/Splunk-Search/How-to-delete-duplicate-events/td-p/70656
I have an identical situation to the one described here: <https://community.splunk.com/t5/Reporting/How-to-not-include-the-duplicated-events-while-accelerating-the/m-p/244884>
Hi @Harikiranjammul
Use an eval statement with a conditional to build the description field based on the value of Status:
```
| makeresults
| eval Server="host1", Status="host is down", Threshold="unable to ping"
| append
    [| makeresults
     | eval Db="db1", Status="database is down", Instance_status="DB instance is not available"]
| eval date=strftime(_time, "%d/%m/%Y %H:%M:%S")
| eval description=case(
    Status=="database is down", "date=" . date . " Db=" . Db . " Status=" . Status . " Instance_status=" . Instance_status,
    Status=="host is down", "date=" . date . " Server=" . Server . " Status=" . Status . " Threshold=" . Threshold
  )
```
This SPL checks the Status field and constructs the description field by concatenating the relevant fields for each case. Ensure your field names match exactly (they are case-sensitive) and are extracted correctly before using this logic.
Thank you! I changed from the table command to the fields command. When I tried to use a drilldown again (set host_token = $row.host$), it still didn't work... any ideas?
```
| eval description=case(
    Status="host is down", Date . "," . Server . "," . Status . "," . Threshold,
    Status="database is down", Date . "," . Db . "," . Status . "," . 'Instance status'
  )
```
Have events like below:

1) date - Timestamp
Server - hostname
Status - host is down
Threshold - unable to ping

2) Date - Timestamp
Db - dbname
Status - database is down
Instance status - DB instance is not available

I would need to write an eval condition and create a new field, description: if the Status field is "database is down", add the date, Db, Status, and Instance status fields to the description field; and if Status is "host is down", add the date, Server, Status, and Threshold fields to the description field.
Thanks @livehybrid
The issue was on the firewall side. It was dropping packets, so the policies/rules had to be applied again. Also adding some useful links which I came across:
https://community.splunk.com/t5/Security/ERROR-TcpOutputFd-Connection-to-host-c-9997-failed-sock-error/m-p/487895
https://splunk.my.site.com/customer/s/article/Splunk-Universal-Forwarder-is-not-sending-events-to-Splunk-Cloud
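As a sketch for anyone hitting the same symptom: the TcpOutputFd errors from the first link appear in the forwarder's internal logs, so a search like this (assuming the forwarder's _internal index is searchable from your search head, or run locally on the forwarder) can confirm the drops:
```
index=_internal sourcetype=splunkd component=TcpOutputFd log_level=ERROR
| stats count latest(_raw) AS last_error BY host
```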
It was an issue with the firewall rules; they were dropping packets.
Hi @Ciccius
This is likely to be an issue with permissions. Please could you validate that the permissions on the rest_ta app in $SPLUNK_HOME/etc/apps/rest_ta are the same across your SHC?
Hi asimit, thank you very much, it was a permission issue. I don't know why the user/group for the rest_ta app was root/root; once I reset it to splunk/splunk, it worked. Thanks!
Hi @StephenD1,

Yes, Splunk Enterprise 9.4.0 (build 6b4ebe426ca6) is affected by CVE-2024-7264. This vulnerability affects libcurl versions from 7.32.0 up to (but not including) 8.9.1, and as you confirmed, your installation includes libcurl 7.61.1, which falls within this range.

## Official Fix

According to the latest information:
- The Splunk fix is identified as SPL-270280
- The fix has been included in Splunk Enterprise 9.4.2
- The fix has also been backported to supported older versions: 9.3.4, 9.2.6, and 9.1.9

## Recommended Actions

### Option 1: Upgrade to a Patched Version

The most comprehensive solution is to upgrade to one of the fixed versions:
- Splunk Enterprise 9.4.2 (preferred for your current version)
- Or one of the other patched versions (9.3.4, 9.2.6, or 9.1.9)

### Option 2: Disable KVStore (MongoDB) Temporarily

If you cannot upgrade immediately, you can consider disabling the KVStore service, which uses MongoDB:

1. Check if any critical apps depend on KVStore:
```
splunk list kvstore -collections
```
2. Disable KVStore:
```
splunk disable kvstore
splunk restart
```
3. Verify MongoDB is no longer running:
```
ps -ef | grep mongo
```

Note that disabling KVStore will impact any apps that rely on it, including:
- Enterprise Security
- ITSI
- Splunk App for Infrastructure
- Some custom apps that use KVStore collections

### Option 3: Mitigate Risk Through Network Controls

If you can't upgrade or disable KVStore:
- Ensure MongoDB is properly configured to listen only on localhost
- Implement additional network controls to restrict access to the MongoDB port (typically 8191)
- Monitor for potential exploitation attempts

## Additional Information

You can find more details in the Splunk article regarding this vulnerability:
https://splunk.my.site.com/customer/s/article/Splunk-vulnerability-libcurl-7-32-0-8-9-1-DoS-CVE-2024-7264

CVE-2024-7264 is a denial-of-service vulnerability in libcurl that could allow a malicious server to cause a denial of service by sending specially crafted responses that trigger excessive memory consumption.

## Long-term Recommendation

For a more permanent solution, plan to upgrade to a patched version as soon as your change management process allows. This is especially important if you have internet-facing Splunk components that might be exposed to this vector.

Happy Splunking!
Hi @Ciccius

Based on the error message you're receiving, this appears to be a permissions issue with the rest_ta app. The specific error "Could not find writer for: /nobody/rest_ta/app/install/state" suggests that Splunk doesn't have the proper permissions to update the app's state.

## Troubleshooting steps:

1. **Check permissions on the app directory**:
```
sudo ls -la /opt/splunk/etc/apps/rest_ta/
```
Make sure the directory and files are owned by the Splunk user and group.

2. **Fix permissions if needed**:
```
sudo chown -R splunk:splunk /opt/splunk/etc/apps/rest_ta/
sudo chmod -R 755 /opt/splunk/etc/apps/rest_ta/
```

3. **Try disabling the app manually before deployment**:
- On the deployment server, edit `/opt/splunk/etc/apps/rest_ta/default/app.conf`
- Set `state = disabled` in the `[install]` section
- Or completely remove the app if it's not needed: `sudo rm -rf /opt/splunk/etc/apps/rest_ta/`

4. **Check for file system issues**:
- The error might indicate file system corruption or disk issues
- Run `df -h` to check disk space (you mentioned this is fine)
- Run `sudo touch /opt/splunk/etc/test.txt` to verify write permissions to the directory

5. **Validate the deployment server's configuration**:
```
sudo /opt/splunk/bin/splunk show shcluster-bundle-status
```

6. **Restart Splunk on both servers**:
```
sudo /opt/splunk/bin/splunk restart
```

7. **Deploy without the problematic app**:
- Temporarily move the app out of the deployment directory
- Try the deployment again
- If successful, the issue is definitely with the app itself

If the issue persists, you may need to check the Splunk logs for more details:
```
sudo cat /opt/splunk/var/log/splunk/splunkd.log | grep rest_ta
```

Let me know if any of these steps help resolve the issue!
Hi all,
I am trying to deploy my apps from the deployment server with the command:
```
/opt/splunk/bin/splunk apply shcluster-bundle -target https://splunksrc:8089 -preserve-lookups true
```
It never failed to do the task, but now I am getting this error:
```
Error while deploying apps to first member, aborting apps deployment to all members: Error while deleting app=rest_ta on target=https://splunksrc:8089: Non-200/201 status_code=500; {"messages":[{"type":"ERROR","text":"\n In handler 'localapps': Cannot update application info: /nobody/rest_ta/app/install/state = disabled: Could not find writer for: /nobody/rest_ta/app/install/state [0] [/opt/splunk/etc]"}]}
```
Both the nodes (deployment and splunksrc) have enough disk space. Any ideas?
Thanks
Francesco