All Posts

Hi @livehybrid  Could have sworn I did, but now it's working. Thanks for your help.  KR
Hi Team, the proxy connectivity test for WHOIS RDAP is failing in the Splunk SOAR UI.

Testing Connectivity
App 'WHOIS RDAP' started successfully (id: 1745296324951) on asset: 'asset_whoisrdp' (id: 22)
Loaded action execution configuration
Querying...
Whois query failed. Error: HTTP lookup failed for https://rdap.arin.net/registry/ip/8.8.8.8.
No action executions found.

I have configured the proxy settings at the global environment level. How do I fix this issue?
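As a starting point for checking the proxy configuration, here is a minimal sketch of the standard variables a Python-based SOAR app typically honors (proxy.example.com:8080 is a placeholder, not a value from the original post):

# SOAR global environment variables (placeholder host/port; adjust NO_PROXY for internal endpoints)
HTTPS_PROXY=http://proxy.example.com:8080
HTTP_PROXY=http://proxy.example.com:8080
NO_PROXY=localhost,127.0.0.1

If the app only reads a proxy from its asset settings rather than the global environment, the same values would need to be set on the asset instead.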
Use iplocation or geostats to display, within a range of 100 kilometers (a span of 0.89 degrees of longitude and 0.91 degrees of latitude), which regions' users have logged in, along with the login time, IP address, and login method.
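A minimal SPL sketch, assuming hypothetical index and field names (auth, src_ip, and login_method are placeholders for whatever your data actually uses):

index=auth action=login
| iplocation src_ip
| table _time src_ip Country Region City login_method

and, for a clustered map view with bins of roughly 100 km (0.91 degrees of latitude by 0.89 degrees of longitude):

index=auth action=login
| iplocation src_ip
| geostats latfield=lat longfield=lon binspanlat=0.91 binspanlong=0.89 count by login_method

iplocation adds lat/lon (plus City/Region/Country) from the client IP, and geostats buckets the events into the given lat/long spans for the map panel.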
I first tried exporting and importing the add-on after I moved to version 4.3.0 of the Add-on Builder. I then tried replacing the splunklib folders in the Add-on Builder and in the app that is failing validation with the one from the latest SDK release. I also used "pip install splunk-sdk" and "python setup.py install" to try to update the SDK. None of these changes seems to have had any effect.
Hello, how can I display a JSON tree structure in a summary index without output_mode=hec? I am not a Splunk admin, so the only way I could create a summary index was via a Splunk report: I enabled "Schedule Report" and "Summary Indexing", and when the report ran it appended the "| summaryindex" syntax to the search. (The original post included a screenshot of these steps.)

The summary index query is: index=summary report=test_1 (the report field is there to differentiate my data from other users').

What I tried:
- | collect index=summary name=test_1 output_mode=hec - the result DID NOT show up in the summary index.
- | collect index=summary marker="hostname=\"https://a1.test.com/\",report=\"test_1\"" - the result DID show up in the summary index, but without the JSON tree structure.
- | collect index=summary marker="hostname=\"https://a1.test.com/\",report=\"test_1\"" output_mode=hec - I received an "invalid argument" error. This is likely because the marker parameter is not compatible with output_mode=hec; I believe only raw output is allowed.

However, at one point I accidentally created a summary index that did display as a JSON tree structure, and I don't know what I did. Please suggest. Thank you so much.

Steps to create the summary index:
1) Created a Splunk report, edited the search, and enabled the schedule
2) Enabled summary indexing
After the report ran, it added the | summaryindex syntax.

Here's the search query:
| windbag | head 1 | eval _raw="{\"name\":\"John Doe\",\"age\":30,\"address\":{\"street\":\"123 Main St\",\"city\":\"Anytown\",\"state\":\"CA\",\"zip\":\"12345\"},\"interests\":[\"reading\",\"hiking\",\"coding\"]}"

When viewing the search result as "List" and clicking "show syntax highlighted", it shows the JSON tree structure - that is the expected result.
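For what it's worth, a minimal sketch assuming a recent Splunk version, where the collect parameter is named output_format rather than output_mode (hec keeps the event as structured JSON; marker is not compatible with it):

| windbag | head 1
| eval _raw="{\"name\":\"John Doe\",\"age\":30}"
| collect index=summary output_format=hec

If your version rejects output_format, that assumption doesn't hold and the raw-mode marker approach is the fallback.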
Hi, I have a 2-site architecture: Site 1 has 2 indexers and 2 ES search heads; Site 2 has 2 indexers and 1 ES search head. All of them are clustered. I wish to have 1 copy per site. What should my RF and SF be? Can you also suggest the minimum RF and SF configuration?
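For illustration, a minimal sketch of the manager node's server.conf for one searchable copy per site (the site names and two-site layout come from the question; origin:1,total:2 is also the minimum that still guarantees a copy on each site):

# server.conf on the cluster manager
[general]
site = site1

[clustering]
mode = manager
multisite = true
available_sites = site1,site2
site_replication_factor = origin:1,total:2
site_search_factor = origin:1,total:2

Each peer then sets its own site = site1 or site = site2 in its [general] stanza.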
hell is on earth

1. Please confirm that I can't just move everything to local and re-upload it?
2. Can I clone all alerts from default via the GUI (is there a mass clone?) and then delete the alerts from default so only the clones are left? And how could I easily rename the clones afterwards, so they don't keep different names for what are the same alerts?
The integration itself is working as expected with ServiceNow, but I have run several testing scenarios and I am finding some issues that don't seem to have solutions.

1. Map a Splunk Team to a ServiceNow group - works.
2. Change the name of the ServiceNow group (name changes are common). The incident still routes to the correct group, but when logging into the mapping settings, Splunk shows the old name and there is no option to edit it. You have to delete the mapping and start over. This is definitely not ideal.
3. Disable the group in ServiceNow to simulate a group that gets decommissioned. The integration will continue to map to the inactive group, which isn't ideal, but I can see why that could happen. Again, there is no option to modify the mapping, nor any way to see that the mapping is associated with an inactive group.
4. It doesn't appear as though Splunk will allow a team to have no members, but I wanted to confirm, because I wanted to test sending an incident from ServiceNow to see how Splunk would handle a group mapped to an empty team (e.g., all the team members leave the org).

If there is no way to keep things in sync, this product is going to be very difficult to manage over time and may not be the solution we are looking for.
Hi, objects stored in the default directory are treated as immutable in Splunk. This isn't specific to Splunk Cloud; it's the same for on-prem (Splunk Enterprise). They cannot be deleted or modified through the UI or REST API, regardless of permissions; only objects in the local directory can be edited or deleted. There is no supported way to delete or modify knowledge objects (like alerts or dashboards) that reside in default from the UI or API. To remove or update them, you must:

1. Edit or remove the relevant stanzas from the .conf files (or dashboard XML files) in the default directory of your app package.
2. Repackage and re-upload the app to Splunk Cloud via App Management.
3. Then, IF a local version overrides the original (now removed) default, delete that too.

Sorry to be the bearer of bad news, but I think you are going to have to update these ~7k KOs in your app config and then repackage and upload.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
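For illustration, a hypothetical stanza of the kind you would delete from default/savedsearches.conf before repackaging (the alert name and settings are placeholders):

# my_app/default/savedsearches.conf
# Delete the entire stanza for each alert you want removed, then repackage and re-upload:
[My Old Alert]
search = index=main error
cron_schedule = */5 * * * *
enableSched = 1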
Hi @cogh3o

The easiest way for you to achieve this as a one-off is probably to pull the configs from the current on-premise search head and build an app which you can upload to Splunk Cloud as a "private app" - this means it must pass AppInspect, but not the full-blown checks that a Splunkbase app might need to pass in order to be uploaded. The main things you will need are a valid app.conf, metadata files, and no extras like .DS_Store / Thumbs.db files.

You can use the SLIM packaging toolkit (https://dev.splunk.com/enterprise/docs/releaseapps/packageapps/packagingtoolkit/) to produce a tarball for your app once you have the files; then I would suggest running it through AppInspect locally to check for any issues before uploading.

There are some good docs over at https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Admin/PrivateApps which guide you through the steps required.

Please don't hesitate to reach back out on this thread if you need any further help, or have any questions.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
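As a rough illustration of the "valid app.conf" piece, a minimal example (the app id, label, and other values are placeholders):

# my_migration_app/default/app.conf
[install]
is_configured = false

[package]
id = my_migration_app

[ui]
is_visible = true
label = My Migration App

[launcher]
author = Your Name
description = Alerts and dashboards migrated from on-prem
version = 1.0.0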
>maxKbps broken w/o the workaround

Same workaround for maxKbps as well.

# Turn off a processor
[pipeline:indexerPipe]
disabled_processors = index_thruput, indexer, indexandforward, latencytracker, diskusage, signing, tcp-output-generic-processor, syslog-output-generic-processor, http-output-generic-processor, stream-output-processor, s2soverhttpoutput, destination-key-processor
The maxKbps issue was reported a few days ago, and the Known Issues list will be updated to include it as well.
@hrawat wrote:
maxKbps is calculated from name=thruput. Since it's missing, maxKbps is not working/applied.

Thx. Is Splunk certain they will not backport the fix to 9.3.x and 9.4.x? Having per_*_thruput *and* maxKbps broken without the workaround seems worthy of a backport. Or, at the very least, the Known Issues entry for SPL-263518 should be updated to mention that maxKbps is not working/applied.
Hi @Cheng2Ready

It's hard to write this without seeing the full search, but having an alert fire when the count != 1 is very limiting. However, you might make it work with something like the search below. If there are no results found then you will struggle, so you might need to append an empty |makeresults to ensure that you have at least 1 event; then you can count the events and check the date:

index=xxx earliest=@d latest=now
| append [| makeresults ]
| stats count as event_count
| eval Date=strftime(now(),"%Y-%m-%d")
| lookup holidays.csv HolidayDate AS Date OUTPUT HolidayDate
| eval wd=strftime(now(),"%w")
| eval isWeekend=if(wd=="0" OR wd=="6",1,0)
| where isWeekend=0 AND isnull(HolidayDate) AND event_count!=2

This will return a single event IF it's not a weekend/holiday AND the event_count is not 2. Note the 2: we're appending one fake result in case there are zero real events. If zero are returned, the append still yields event_count=1, which will then still fire your alert. You will need to adjust your alert to fire when the number of results > 0 (or != 0).

Does that make sense?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
maxKbps is calculated from name=thruput. Since that metric is missing, maxKbps is not being applied.
Hi,

I created a custom app in cloud so I can migrate all alerts and dashboards from on-prem. I put everything in default as the docs advise, and my metadata is like this:

# Application-level permissions
[]
access = read : [ * ], write : [ * ]
export = system

The problem is that even I, as admin, can't delete dashboards and alerts. I tried to reassign them to myself via Knowledge Objects; nothing works. What I can find on Google is that once everything is in default, it's immune to deletion, but I can't put it in local, as that is also not allowed. Now I've exported my app, but there is already a local dir because users changed some data. Do I now have to move everything from local to default so I can re-upload it? And what can I do then to be able to delete alerts? We have like 7k alerts and dashboards, so it's a nightmare if I have to delete them manually from the conf files and re-upload again. Please, help!
Hi @ws

Let us know how you get on with the Python script. In the meantime, the file you want to edit is: $SPLUNK_HOME/etc/log.cfg (e.g. /opt/splunk/etc/log.cfg)

Look for category.<key> and change the default (usually INFO) to DEBUG for those keys. You will need to restart Splunk. Then you should see further info in index=_internal component=<key>, which *might* help! This should be on the forwarder picking up the logs.

Don't forget to add karma/like any posts which help.

Thanks
Will
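For example, a couple of file-monitoring categories you might bump to DEBUG (assuming the tailing input is what you are chasing; the exact keys can vary by version):

# $SPLUNK_HOME/etc/log.cfg
category.TailingProcessor=DEBUG
category.WatchedFile=DEBUG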
Hi @ganesanvc

If you do "| return FileSource RemoteHost LocalPath RemotePath" then it's going to do an AND between these fields in your main search - is this what you want? If you want an "OR" then I think you might want to do:

[| makeresults
| eval text_search="*$text_search$*"
| eval escaped=replace(text_search, "\\", "\\\\")
| eval FileSource=escaped, RemoteHost=escaped, LocalPath=escaped, RemotePath=escaped
| table FileSource RemoteHost LocalPath RemotePath
| format "(" "(" "OR" ")" "OR" ")" ]

This will create something like:

( ( FileSource="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR LocalPath="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR RemoteHost="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR RemotePath="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" ) )

Note - I am not 100% sure how many \\ you are expecting, but when I ran your makeresults search it failed and I had to escape the replace as:

| eval escaped=replace(text_search, "\\\\", "\\\\\\\\")

You can run the makeresults on its own and substitute your token to validate the output you get and ensure the search works correctly.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I have instrumented a Kubernetes cluster in a test environment. I have also instrumented a Java application within that cluster. Metrics are reporting for both. However, within APM, when clicking Infrastructure at the bottom of the screen, I get no data, as if I have no infrastructure configured. What configuration am I missing to correlate the data between the two?

[Screenshot: clicking Infrastructure under an instrumented APM service]
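If it helps as a starting point: correlation generally depends on the APM service and the infrastructure data reporting matching resource attributes. A sketch of the environment for the Java container, assuming the Splunk OTel Java agent (the values are placeholders and must match what the cluster's collector reports):

OTEL_SERVICE_NAME=my-java-app
OTEL_RESOURCE_ATTRIBUTES=deployment.environment=test

If the application's traces are not routed through the cluster's OTel collector (which attaches the Kubernetes metadata), that is another common reason the Infrastructure tab comes up empty.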
My search query:

index=xxx <xxxxxxx>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as Date output HolidayDate
| eval should_alert=if(isnull(HolidayDate), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"

So I've been trying to create a complicated alert. Unfortunately it failed, and I'm looking for guidance. The alert is supposed to fire if there are no results OR more than 1, unless it's the day after a weekend or holiday - in other words, it should alert on 0 results OR anything other than 1.

I've set the following trigger condition: Number of results is not equal to 1.

The problem: when a date appears in the muted dates (holiday.csv), I want the alert muted. It turns out there were 0 events that day, and those 0 results triggered the alert, so it fired on Easter. Also, when we mute a date, does that make the search return 0 events? Technically it will still fire on those dates because of my trigger condition. How can I make sure it mutes on the holiday.csv dates, and yet still alerts on 0 events for dates that are not in holiday.csv?