Hello Team, I need to back up my Splunk log data to an AWS S3 bucket, but first I need to confirm the format in which the logs will be stored, so that if I need those logs in the future I can convert them back to a readable form. Please note, I really need to know the exact format of the log data stored in Splunk. Please confirm. Regards, Asad Nafees
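For context, Splunk stores indexed data in proprietary bucket directories (the raw events live in a compressed journal inside each bucket's rawdata directory, not as plain log files). A hedged sketch of converting such a bucket back to readable CSV with Splunk's bundled exporttool; the bucket path below is a hypothetical example you would replace with a real bucket directory:
```
# Export a single bucket's raw events to CSV (bucket path is a placeholder)
/opt/splunk/bin/splunk cmd exporttool \
  /opt/splunk/var/lib/splunk/defaultdb/db/db_1747000000_1746000000_123 \
  /tmp/bucket_export.csv -csv
```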
Create a Python script to collect the logs and send them to Splunk? Why even use Splunk then? If you have a log collection method and a Bootstrap dashboard with a small local database, you have a full-fledged monitoring app. There are a thousand monitoring apps out there; what's one more?
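For what it's worth, the "Python script that sends to Splunk" half of this is straightforward; a minimal hedged sketch using the HTTP Event Collector, where the URL and token are placeholders you would replace with your own:
```python
import json
import requests

# Hypothetical HEC endpoint and token - replace with your own values
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def send_to_splunk(event: dict) -> None:
    """Post a single event to Splunk's HTTP Event Collector."""
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps({"event": event, "sourcetype": "_json"}),
        timeout=10,
    )
    resp.raise_for_status()

send_to_splunk({"message": "host is up", "host": "host1"})
```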
Thank you! How can I use eval with like() here? The Status field contains some other text before and after the 'host is down' and 'database is down' values, and it varies with every event. I just wanted to use something like status=like(*host is down*).
Here is some guidance on how to resolve the problem: https://community.splunk.com/t5/Splunk-Search/How-to-delete-duplicate-events/td-p/70656
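For quick reference, a hedged sketch of the usual search-time approach from that thread's family of answers, keeping only the first occurrence of each identical raw event (the index and sourcetype names are placeholders):
```
index=your_index sourcetype=your_sourcetype
| dedup _raw
```
Note that dedup only filters results at search time; physically removing events from the index requires piping a search to the delete command, which needs the can_delete role and does not reclaim disk space.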
I have an identical situation as described here: https://community.splunk.com/t5/Reporting/How-to-not-include-the-duplicated-events-while-accelerating-the/m-p/244884
Hi @Harikiranjammul

Use an eval statement with a conditional to build the description field based on the value of Status.

```
| makeresults
| eval Server="host1", Status="host is down", Threshold="unable to ping"
| append [| makeresults | eval Db="db1", Status="database is down", Instance_status="DB instance is not available"]
| eval date=strftime(_time, "%d/%m/%Y %H:%M:%S")
| eval description=case(
    Status=="database is down", "date=" . date . " Db=" . Db . " Status=" . Status . " Instance_status=" . Instance_status,
    Status=="host is down", "date=" . date . " Server=" . Server . " Status=" . Status . " Threshold=" . Threshold
  )
```

This SPL checks the Status field and constructs the description field by concatenating the relevant fields for each case. Ensure your field names match exactly (they are case-sensitive) and are extracted correctly before using this logic.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
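Since the follow-up above notes that Status carries extra text around the key phrases, a hedged variant of the same case() using like() with % wildcards (SQL-style, not *) might look like this, with the same assumed field names as in the answer:
```
| eval description=case(
    like(Status, "%database is down%"), "date=" . date . " Db=" . Db . " Status=" . Status . " Instance_status=" . Instance_status,
    like(Status, "%host is down%"), "date=" . date . " Server=" . Server . " Status=" . Status . " Threshold=" . Threshold
  )
```
In SPL's like(), % matches any run of characters and _ matches a single character, so *host is down* would be written %host is down%. Also, like() is case-sensitive, so consider matching against lower(Status) if the casing varies between events.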
Thank you! I changed from the table command to the fields command. When I tried to use a drilldown again (set host_token = $row.host$), it still didn't work...any ideas?
| eval description=case(
    Status=="host is down", Date . "," . Server . "," . Status . "," . Threshold,
    Status=="database is down", Date . "," . Db . "," . Status . "," . 'Instance status'
  )
I have events like the below:

1) date - Timestamp, Server - hostname, Status - host is down, Threshold - unable to ping

2) Date - Timestamp, Db - dbname, Status - database is down, Instance status - DB instance is not available

I need to write an eval condition that creates a new field, description: if the Status field is "database is down", add the date, Db, Status, and Instance status fields to the description field; and if Status is "host is down", add the date, Server, Status, and Threshold fields to the description field.
Thanks @livehybrid. The issue was on the firewall side: it was dropping packets, so the policies/rules had to be applied again. Also adding some useful links I came across:

https://community.splunk.com/t5/Security/ERROR-TcpOutputFd-Connection-to-host-c-9997-failed-sock-error/m-p/487895
https://splunk.my.site.com/customer/s/article/Splunk-Universal-Forwarder-is-not-sending-events-to-Splunk-Cloud
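For anyone hitting the same TcpOutputFd error, a couple of quick checks from the forwarder host can confirm whether the path to port 9997 is open; the install path and indexer hostname below are assumed placeholders:
```
# Show configured forward-servers and whether each is active or inactive
/opt/splunkforwarder/bin/splunk list forward-server

# Basic reachability test to the indexer's receiving port
nc -vz your-indexer.example.com 9997
```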
Hi @Ciccius

This is likely to be an issue with permissions. Could you please validate that the permissions on the rest_ta app in $SPLUNK_HOME/etc/apps/rest_ta are the same across your SHC?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
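A hedged one-liner you could run on each SHC member and diff between them (assuming $SPLUNK_HOME is set in the environment):
```
# Recursively list owner, group, and mode for the app on each member
ls -laR $SPLUNK_HOME/etc/apps/rest_ta
```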
Hi asimit, thank you very much, it was a permission issue. I don't know why the user/group for the rest_ta app was root/root; once I reset it to splunk/splunk, it worked. Thanks!
Hi @StephenD1,

Yes, Splunk Enterprise 9.4.0 (build 6b4ebe426ca6) is affected by CVE-2024-7264. This vulnerability affects libcurl versions from 7.32.0 up to (but not including) 8.9.1, and as you confirmed, your installation includes libcurl 7.61.1, which falls within this range.

## Official Fix

According to the latest information:
- The Splunk fix is identified as SPL-270280
- The fix has been included in Splunk Enterprise 9.4.2
- The fix has also been backported to supported older versions: 9.3.4, 9.2.6, and 9.1.9

## Recommended Actions

### Option 1: Upgrade to a Patched Version

The most comprehensive solution is to upgrade to one of the fixed versions:
- Splunk Enterprise 9.4.2 (preferred for your current version)
- Or one of the other patched versions (9.3.4, 9.2.6, or 9.1.9)

### Option 2: Disable KVStore (MongoDB) Temporarily

If you cannot upgrade immediately, you can consider disabling the KVStore service, which uses MongoDB:

1. Check whether any critical apps depend on KVStore:
```
splunk list kvstore -collections
```
2. Disable KVStore:
```
splunk disable kvstore
splunk restart
```
3. Verify MongoDB is no longer running:
```
ps -ef | grep mongo
```

Note that disabling KVStore will impact any apps that rely on it, including:
- Enterprise Security
- ITSI
- Splunk App for Infrastructure
- Some custom apps that use KVStore collections

### Option 3: Mitigate Risk Through Network Controls

If you can't upgrade or disable KVStore:
- Ensure MongoDB is properly configured to listen only on localhost
- Implement additional network controls to restrict access to the MongoDB port (typically 8191)
- Monitor for potential exploitation attempts

## Additional Information

You can find more details in the Splunk article regarding this vulnerability: https://splunk.my.site.com/customer/s/article/Splunk-vulnerability-libcurl-7-32-0-8-9-1-DoS-CVE-2024-7264

CVE-2024-7264 is a denial-of-service vulnerability in libcurl that could allow a malicious server to cause a denial of service by sending specially crafted responses that trigger excessive memory consumption.

## Long-term Recommendation

For a more permanent solution, plan to upgrade to a patched version as soon as your change management process allows. This is especially important if you have internet-facing Splunk components that might be vulnerable to this exploitation vector.

Happy Splunking!
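As an aside, a hedged way to confirm which libcurl build ships with a given Splunk install; the library path is an assumption based on a default Linux install and may differ on your system:
```
# Print the embedded libcurl version string from Splunk's bundled library
strings /opt/splunk/lib/libcurl.so.4 | grep -i 'libcurl/'
```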
Hi @Ciccius

Based on the error message you're receiving, this appears to be a permissions issue with the rest_ta app. The specific error "Could not find writer for: /nobody/rest_ta/app/install/state" suggests that Splunk doesn't have the proper permissions to update the app's state.

## Troubleshooting steps:

1. **Check permissions on the app directory**:
```
sudo ls -la /opt/splunk/etc/apps/rest_ta/
```
Make sure the directory and files are owned by the Splunk user and group.

2. **Fix permissions if needed**:
```
sudo chown -R splunk:splunk /opt/splunk/etc/apps/rest_ta/
sudo chmod -R 755 /opt/splunk/etc/apps/rest_ta/
```

3. **Try disabling the app manually before deployment**:
- On the deployment server, edit `/opt/splunk/etc/apps/rest_ta/default/app.conf`
- Set `state = disabled` in the `[install]` section
- Or completely remove the app if it's not needed: `sudo rm -rf /opt/splunk/etc/apps/rest_ta/`

4. **Check for file system issues**:
- The error might indicate file system corruption or disk issues
- Run `df -h` to check disk space (you mentioned this is fine)
- Run `sudo touch /opt/splunk/etc/test.txt` to verify write permissions on the directory

5. **Validate the deployment server's configuration**:
```
sudo /opt/splunk/bin/splunk show shcluster-bundle-status
```

6. **Restart Splunk on both servers**:
```
sudo /opt/splunk/bin/splunk restart
```

7. **Deploy without the problematic app**:
- Temporarily move the app out of the deployment directory
- Try the deployment again
- If it succeeds, the issue is definitely with the app itself

If the issue persists, you may need to check the Splunk logs for more details:
```
sudo grep rest_ta /opt/splunk/var/log/splunk/splunkd.log
```

Let me know if any of these steps help resolve the issue!
Hi all,

I am trying to deploy my apps from the deployment server with the command:

```
/opt/splunk/bin/splunk apply shcluster-bundle -target https://splunksrc:8089 -preserve-lookups true
```

It has never failed before, but now I am getting this error:

```
Error while deploying apps to first member, aborting apps deployment to all members: Error while deleting app=rest_ta on target=https://splunksrc:8089: Non-200/201 status_code=500; {"messages":[{"type":"ERROR","text":"\n In handler 'localapps': Cannot update application info: /nobody/rest_ta/app/install/state = disabled: Could not find writer for: /nobody/rest_ta/app/install/state [0] [/opt/splunk/etc]"}]}
```

Both nodes (deployment and splunksrc) have enough disk space. Any ideas?

Thanks
Francesco
Hi @sankardevarajan

The configuration you provided is for the OnBase application to send logs to Splunk, not for Splunk itself. You need to configure the OnBase application's .config file (likely Application-Server-Web.config or another relevant config file) on the OnBase Application Server to include the specified route for the Hyland.Logging component:

```
<Route name="Logging_Local_Splunk">
  <add key="Splunk" value="http://your-splunk-heavy-forwarder-or-indexer:8088"/>
  <add key="SplunkToken" value="your-splunk-http-event-collector-token"/>
  <add key="DisableIPAddressMasking" value="false"/>
</Route>
```

To receive these logs in Splunk Cloud, you need to:

1. Set up an HTTP Event Collector (HEC) token in your Splunk Cloud instance.
2. Configure the OnBase application to send logs to the HEC endpoint.

In Splunk Cloud, you will need to create an HEC token and get the HEC endpoint URL. You can then use this token and endpoint URL in the OnBase application's .config file. The http://localhost:SplunkPort in the configuration should be replaced with the URL of your Splunk HEC endpoint (typically https://http-inputs-<stackName>.splunkcloud.com), and SplunkTokenNumber should be replaced with the actual HEC token.

For more information on configuring HEC in Splunk Cloud, refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector.

For reference, the current instructions for creating HEC tokens in Splunk Cloud are:

1. Click Settings > Add Data.
2. Click Monitor.
3. Click HTTP Event Collector.
4. In the Name field, enter a name for the token.
5. (Optional) In the Source name override field, enter a name for a source to be assigned to events that this endpoint generates.
6. (Optional) In the Description field, enter a description for the input.
7. (Optional) If you want to enable indexer acknowledgment for this token, click the Enable indexer acknowledgment checkbox.
8. Click Next.
9. (Optional) Make edits to source type and confirm the index where you want HEC events to be stored. See Modify input settings.
10. Click Review.
11. Confirm that all settings for the endpoint are what you want.
12. If all settings are what you want, click Submit. Otherwise, click < to make changes.
13. (Optional) Copy the token value that Splunk Web displays and paste it into another document for reference later.
14. (Optional) Click Track deployment progress to see progress on how the token has been deployed to the rest of the Splunk Cloud Platform deployment. When you see a status of "Done", you can then use the token to send data to HEC.

Ensure that the Splunk HEC endpoint is accessible from the OnBase Application Server (a quick connectivity test is sketched below). If it's not, you may need to set up a Heavy Forwarder to act as an intermediary.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
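As a follow-up, a hedged sketch of testing the HEC endpoint from the OnBase server once the token exists; the stack name and token below are placeholders:
```
curl "https://http-inputs-mystack.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
  -d '{"event": "HEC connectivity test", "sourcetype": "manual"}'
```
A {"text":"Success","code":0} response indicates the token and network path are both working.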
I want to onboard application logs into Splunk Cloud. Hyland.Logging can be configured to send information to Splunk as well as to the Diagnostics Console by modifying the server's .config file. To configure Hyland.Logging to send information to Splunk:

```
<Route name="Logging_Local_Splunk">
  <add key="Splunk" value="http://localhost:SplunkPort"/>
  <add key="SplunkToken" value="SplunkTokenNumber"/>
  <add key="DisableIPAddressMasking" value="false"/>
</Route>
```

(From the Hyland product documentation: Configuring Hyland.Logging for Splunk, Application Server.)

I am not understanding where the above needs to be configured in Splunk. Much appreciated if anyone can guide me.
Yes. If you don't have "holes" in your firewall to send data directly from the other components to QRadar, it won't work. You might try using RULESET in props.conf on the indexers instead of TRANSFORMS.
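For illustration, a hedged sketch of what syslog routing on the indexers might look like; the stanza names, sourcetype, and QRadar host are placeholders, and RULESET- (rather than TRANSFORMS-) lets the rule run at ingest time even for data already parsed by a forwarder:
```
# props.conf (on the indexers)
[your_sourcetype]
RULESET-route_to_qradar = send_to_qradar

# transforms.conf
[send_to_qradar]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = qradar_out

# outputs.conf
[syslog:qradar_out]
server = qradar.example.com:514
```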
@PickleRick The SH, CM & LM don't have connectivity to the remote QRadar; only the indexer is configured to send the syslogs to the remote QRadar. So there is no point in configuring syslog output on the SH, CM, and LM, right?