All Posts


Hi @Cole-Potter , you need a Splunk Enterprise License; its size depends on the volume of logs you index. For this reason I suggest avoiding local indexing. Ciao. Giuseppe
Try Ingest Actions.  They're easy to use and even have a preview GUI so you know they'll work before they're implemented.
Sorry about the week-late reply, but that does not seem to work. I am still getting logs that I don't need; I just disabled ingestion from that folder location. Does Splunk have any app that would filter data more easily than creating the props.conf and transforms.conf files?
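For reference, the props/transforms approach being discussed typically looks like the sketch below: a TRANSFORMS entry on the sourcetype routes matching events to nullQueue so they are dropped before indexing. The sourcetype name, stanza name, and regex here are made up for illustration; only the nullQueue routing mechanism itself is standard.

# props.conf (on the indexer or heavy forwarder)
[my_custom:sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted_events

# transforms.conf
[drop_unwanted_events]
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue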
Hi @livehybrid , Thanks for the reply. There's no return from the command, actually. I tried -v and it doesn't show anything unusual; it just waits for the response from the API. The curl command I use is this:

curl --location 'https://api.eu0.signalfx.com/v2/synthetics/tests/api/try_now?locationId=aws-eu-central-1' \
--header 'Content-Type: application/json' \
--header 'X-SF-TOKEN: XXXXXXXXXXXXX' \
--data '{
  "test": {
    "name": "DOCID Check - Deployment Copy",
    "active": false,
    "frequency": 5,
    "schedulingStrategy": "round_robin",
    "locationIds": ["aws-eu-central-1"],
    "automaticRetries": 0,
    "customProperties": [
      { "key": "group", "value": "DOCID" }
    ],
    "deviceId": 1,
    "requests": [
      {
        "configuration": {
          "name": "Get Auth token",
          "url": "https://AAAAAA.XXXXXX.com/api/v3/authentication",
          "requestMethod": "POST",
          "headers": { "Content-Type": "application/json" },
          "body": "{\n \"userName\": \"{{custom.user_name}}\",\n \"password\": \"{{custom.user_password}}\"\n}"
        },
        "setup": [
          { "code": "{{env.prod_user}}", "name": "JavaScript run", "type": "javascript", "variable": "user_name" },
          { "code": "{{env.prod_user_password}}", "name": "JavaScript run", "type": "javascript", "variable": "user_password" }
        ],
        "validations": [
          { "name": "Extract from response body", "type": "extract_json", "source": "{{response.body}}", "variable": "session_token", "extractor": "$.token" },
          { "name": "Assert response code equals 201", "type": "assert_numeric", "actual": "{{response.code}}", "expected": "201", "comparator": "equals" }
        ]
      },
      {
        "configuration": {
          "name": "Search by DOCID",
          "url": "https://tttt.xxxxx.com/api/Search?search=docid==11111111111",
          "requestMethod": "GET",
          "headers": { "Authorization": "Bearer {{custom.session_token}}" },
          "body": null
        },
        "setup": [],
        "validations": [
          { "name": "Extract from response body", "type": "extract_json", "source": "{{response.body}}", "variable": "tripreference", "extractor": "$.trips.tripReference" }
        ]
      }
    ]
  }
}'

Some JSON bodies work if I empty the validations or include only one request. An empty JSON body also returns a 404 error with some comments. But I need to fetch the test first and send it back with some modifications in a bash script.
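Since the goal above is to fetch an existing test, tweak it, and resubmit it to try_now, here is a minimal Python sketch of that flow (stdlib only). The endpoint and header names come from the original post; the list of server-assigned fields to strip from an exported test is an assumption you should adjust to what your export actually contains. The explicit timeout means a "hanging" request fails fast instead of blocking forever, which can help narrow down the Postman-vs-curl difference.

```python
import json
import urllib.request

API_BASE = "https://api.eu0.signalfx.com/v2/synthetics/tests/api"  # from the post
TOKEN = "XXXXXXXXXXXXX"  # placeholder, as in the original command


def strip_readonly_fields(test_payload: dict) -> dict:
    """Remove server-assigned fields an exported test may carry.
    The field names listed here are assumptions, not a documented list."""
    cleaned = dict(test_payload)
    for key in ("id", "created", "lastUpdated", "createdBy", "lastUpdatedBy"):
        cleaned.pop(key, None)
    return cleaned


def try_now(test_body: dict, location_id: str = "aws-eu-central-1") -> bytes:
    """POST the (cleaned) test to the try_now endpoint with a hard timeout."""
    payload = json.dumps({"test": strip_readonly_fields(test_body)}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/try_now?locationId={location_id}",
        data=payload,
        headers={"Content-Type": "application/json", "X-SF-TOKEN": TOKEN},
        method="POST",
    )
    # timeout makes a hang raise URLError instead of waiting indefinitely
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read()
```

Driving this from a CI/CD job avoids the shell-quoting pitfalls of embedding a large JSON document inside a single-quoted curl argument, which is a common reason the same body behaves differently in Postman and in a shell.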
Please provide further detail of what you have and what you are trying to achieve.
It is a dashboard query which is linked to other queries as well.
First off, thank you for the response. We are sub-50 clients currently on the deployment server, but that is very helpful information if we decide to expand. I probably should have been a little more specific regarding the alert. I am leveraging Splunk Cloud for email alerts, but I would like to index logs locally and forward them, because I want to be able to kick off local scripts on hosts, which I assumed would have to be local to the network. I would only want to do this with limited inputs, sub-5 hosts. Do you know what kind of licensing is required to index with Splunk Enterprise on-prem?
Hi @iduran  Do you get any specific error back from curl? Are you able to share the full curl command (redacted)? If you haven't done so already, you could try adding '-v' to the curl command to get more verbose output.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
Hi @Chayan19  Unfortunately, I don't think there are any apps currently on Splunkbase to achieve this. As @PickleRick said, you might have some success with HTTP Alert Action - *however*, I believe the OneDrive API requires authentication using OAuth 2.0, which I don't think you will be able to do with that approach. The only thing I can think of is using the "Export Everything" app, which can send to "Azure Blob & Data Lake Object Storage" - from there you'd be within the Azure ecosystem, so you may be able to use a service account to push it to OneDrive via a function. It might become a little complicated, though! Other than that, I think it would need to be a custom Python alert action, which would need developing. Sorry I couldn't be of any more help!
I have run into that issue before when configuring CAC/token login. I can't remember if this was the reason, but make sure that in authentication.conf the host field for the LDAP server is the FQDN of the server and not the IP address.
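In other words, the LDAP strategy stanza in authentication.conf should look something like the sketch below. The host setting is the real authentication.conf setting being referred to; the stanza name, hostname, and port values here are made up for illustration.

# authentication.conf
[My_LDAP_Strategy]
host = ldapserver.example.com    # FQDN, not 10.0.0.5
SSLEnabled = 1
port = 636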
Hi @harihara  If you want to search back over the last 24 hours, you can just replace the existing "-75m@m" with "-1440m@m"; alternatively, use "-1d" for the last 24 hours from now, or "-1d@m", which is the same as -1440m@m but easier to read. Are you having any issues when trying to replace "-75m@m" with "-1440m@m"?
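Applied to the search from the question, that substitution would look like the sketch below (index, sourcetype, and field names are taken as-is from the original post):

index="prd-Thailand" sourcetype=abc-app-log earliest=-1440m@m latest=now
| table a, b, c, d, e, f
| where a=1324 AND b=345
| stats count as volume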
index="prd-Thailand" sourcetype=abc-app-log earliest=-75m@m latest=now
| table a, b, c, d, e, f
| where a=1324 AND b=345
| stats count as volume

The question is how to replace earliest with -1440m@m. Please let me know if any more details are required.
It's better to do it the other way around, without using append, since a subsearch has its limitations:

index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| stats count by sourcetype, host
| inputlookup append=t myhosts.csv
| stats values(sourcetype) as not_missing by host
| where isnull(not_missing)
https://help.splunk.com/en/splunk-enterprise/administer/inherit-a-splunk-deployment/9.3/inherited-deployment-tasks/components-and-their-relationship-with-the-network
Merged both threads.
You could try to use HTTP Alert Action to push the report with HTTP REST API according to https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_put_content?view=odsp-graph-online But I've never tried it myself.
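The "small file" upload described in that Microsoft documentation is a single PUT of the file bytes to the driveItem content endpoint with a Bearer token. A minimal Python sketch of that call is below (stdlib only). Acquiring the OAuth 2.0 access token (e.g. via a registered Azure AD app) is out of scope here; the access token and the file path are placeholders you would supply.

```python
import urllib.parse
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"


def upload_url(item_path: str) -> str:
    """Build the driveItem upload URL for a path relative to the drive root,
    per the PUT /me/drive/root:/{item-path}:/content pattern in the docs."""
    quoted = urllib.parse.quote(item_path)
    return f"{GRAPH_BASE}/me/drive/root:/{quoted}:/content"


def upload_report(item_path: str, data: bytes, access_token: str) -> bytes:
    """PUT the report bytes to OneDrive; suitable for small files only."""
    req = urllib.request.Request(
        upload_url(item_path),
        data=data,
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/octet-stream",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return resp.read()
```

Wired into a custom alert action or a scheduled script, this is the piece that would replace the HTTP Alert Action if OAuth turns out to be a blocker.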
As usual - it depends. The more parallel indexing pipelines you have, the higher the theoretical possible indexing throughput (but the growth isn't linear). But you're also "binding" more CPUs on your indexers. Remember that each pipeline uses 4-6 CPUs. It's always a balance between indexing performance and search performance. Reducing indexing performance (because that's what removing pipelines is) will leave you with more capacity for searching, but yes, if you're close to the edge, it might result in clogging the input. Unfortunately, I don't know of a 100% sure way to tell whether you can drop one more pipeline. You can check the reports on the Indexing -> Performance: Advanced screen in your MC to see whether you're loaded to the brim or not yet, but that's still only an "educated guess". It's usually done the other way around - if you have spare CPUs, you add more pipelines. I don't recall ever removing pipelines in a busy production environment.
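For context, the pipeline count being discussed is the parallelIngestionPipelines setting in server.conf on each indexer; the value of 2 below is just an example, not a recommendation:

# server.conf (on the indexer)
[general]
parallelIngestionPipelines = 2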
Hi @kaeleyt  I wonder if the following will help work out what is going on. Can you run this to see if it shows resultCount=0 or any other issues? You might need to tweak it:

| rest splunk_server=local /servicesNS/nobody/my_app_name/saved/searches/ScheduledReportA/history
| table updated, published, eventCount, is* id
| rex field=id "(?<uri>\/services.*)$"
| map maxsearches=10 search="|rest $uri$ "
| table id dispatchState eventCount resultCount ttl is*
How can I automate the process of exporting a Splunk report and uploading it to a OneDrive link? Does anyone have experience or suggestions on how to achieve this?
Hi everyone, I'm new to Splunk Cloud and trying to implement post-deployment test runs in our CI/CD pipelines. We have many tests in Synthetics and want to run them after deployments so that we can confirm everything went well. My problem is that when I make an API call to /tests/api/try_now from Postman with a JSON body (the test), it works perfectly, but when I make the same call with cURL it hangs. I used this documentation: https://dev.splunk.com/observability/reference/api/synthetics_api_tests/latest#endpoint-createtrynowapitest  I tried many versions of the test JSON; sometimes it works with only one resource in it, sometimes it works without validations. My request test JSON is created automatically from an existing test, so I don't want to change it. What could be the problem that it works with Postman but not cURL? Any help is appreciated. Regards, ilker