All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hell is on earth. 1. Please confirm that I can't move everything to local and re-upload it? 2. Can I clone all alerts from default via the GUI (is there a mass clone?) and then delete the alerts from default so only the clones are left? And how can I easily rename the clones afterwards so they end up with the same names as the originals rather than different ones?
The integration itself is working as expected with ServiceNow, but I have run several testing scenarios and I am finding some issues that don't seem to have solutions.

1. Map a Splunk team to a ServiceNow group - works.
2. Change the name of the ServiceNow group (name changes are common). The incident still routes to the correct group, but when logging into the mapping settings, Splunk shows the old name and there is no option to edit it. You have to delete the mapping and start over. This is definitely not ideal.
3. Disable the group in ServiceNow to simulate a group that gets decommissioned. The integration will continue to map to the inactive group, which isn't ideal, but I can see why that could happen. Again, there is no option to modify the mapping, nor any way to see that the mapping is associated with an inactive group.
4. It doesn't appear as though Splunk will allow a team to have no members, but I wanted to confirm, because I wanted to test sending an incident from ServiceNow to see how Splunk would handle it if the group was mapped to an empty team (e.g. all the team members leave the org).

If there is no way to keep things in sync, this product is going to be very difficult to manage over time and may not be the solution we are looking for.
Hi

Objects stored in the default directory are treated as immutable in Splunk. This isn't specific to Splunk Cloud; it's the same for on-prem (Splunk Enterprise). They cannot be deleted or modified through the UI or REST API, regardless of permissions. Only objects in the local directory can be edited or deleted. There is no supported way to delete or modify knowledge objects (like alerts or dashboards) that reside in default from the UI or API. To remove or update them, you must:

Edit or remove the relevant .conf stanzas (or dashboard XML files) from the default directory in your app package.
Repackage and re-upload the app to Splunk Cloud via App Management.
Then, if a local version overrides the original (now removed) default, delete that too.

Sorry to be the bearer of bad news, but I think you are going to have to update these ~7k KOs in your app config and then repackage and upload.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
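As a very rough illustration of the bulk stanza edit described above, the sketch below merges an app's local/savedsearches.conf into default/ and drops unwanted stanzas before repackaging. The paths and the "old_alert_" prefix are placeholders, and Python's configparser does not understand every .conf nuance (for example trailing-backslash line continuations), so verify the result with btool/AppInspect before uploading.

# Sketch only: overlay local savedsearches.conf onto default/ and prune stanzas.
from configparser import RawConfigParser
from pathlib import Path

APP = Path("/tmp/my_exported_app")                    # placeholder: unpacked app export
DEFAULT = APP / "default" / "savedsearches.conf"
LOCAL = APP / "local" / "savedsearches.conf"

def load(path):
    cp = RawConfigParser(strict=False)
    cp.optionxform = str                              # keep key case exactly as written
    if path.exists():
        cp.read(path, encoding="utf-8")
    return cp

default_conf = load(DEFAULT)
local_conf = load(LOCAL)

# Overlay local changes onto default so only one copy of each stanza remains
for stanza in local_conf.sections():
    if not default_conf.has_section(stanza):
        default_conf.add_section(stanza)
    for key, value in local_conf.items(stanza):
        default_conf.set(stanza, key, value)

# Example clean-up: drop alerts you no longer want shipped in the app
for stanza in list(default_conf.sections()):
    if stanza.startswith("old_alert_"):               # placeholder naming convention
        default_conf.remove_section(stanza)

with open(DEFAULT, "w", encoding="utf-8") as f:
    default_conf.write(f)
print(f"Wrote {len(default_conf.sections())} stanzas to {DEFAULT}")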
Hi @cogh3o

The easiest way for you to achieve this as a one-off is probably to pull the configs from the current on-premises search head and build an app which you can upload to Splunk Cloud as a "private app" - this means it must pass AppInspect, but not the full-blown checks that a Splunkbase app would need to pass in order to be uploaded. The main things you will need are a valid app.conf, metadata files, and no extras like .DS_Store / Thumbs.db files. You can use the SLIM packaging toolkit (https://dev.splunk.com/enterprise/docs/releaseapps/packageapps/packagingtoolkit/) to produce a tarball for your app once you have the files; then I would suggest running it through AppInspect locally to check for any issues before uploading. There are some good docs over at https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Admin/PrivateApps which guide you through the steps required. Please don't hesitate to reach back out on this thread if you need any further help or have any questions.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
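For what it's worth, here is a small pre-flight sketch along the lines of the advice above: it strips the OS junk files mentioned, sanity-checks app.conf and metadata, and builds the tarball. The ./myapp directory name is just an assumption; SLIM and the splunk-appinspect CLI remain the recommended tools and this does not replace them.

# Sketch only: clean up an app directory and package it as a tarball.
import tarfile
from pathlib import Path

APP_DIR = Path("myapp")                               # placeholder app directory
JUNK = {".DS_Store", "Thumbs.db"}

# Remove OS junk files that AppInspect will flag
for path in APP_DIR.rglob("*"):
    if path.name in JUNK:
        print(f"Removing {path}")
        path.unlink()

# Basic sanity checks before packaging
assert (APP_DIR / "default" / "app.conf").exists(), "app.conf is required"
assert (APP_DIR / "metadata" / "default.meta").exists(), "metadata is required"

# Build the tarball with the app directory as the top-level folder
with tarfile.open(f"{APP_DIR.name}.tar.gz", "w:gz") as tar:
    tar.add(APP_DIR, arcname=APP_DIR.name)
print(f"Created {APP_DIR.name}.tar.gz - now run it through AppInspect")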
> maxKbps broken w/o the workaround

The same workaround applies for maxKbps as well.

# Turn off a processor
[pipeline:indexerPipe]
disabled_processors = index_thruput, indexer, indexandforward, latencytracker, diskusage, signing, tcp-output-generic-processor, syslog-output-generic-processor, http-output-generic-processor, stream-output-processor, s2soverhttpoutput, destination-key-processor
maxKbps was reported a few days ago, and the known issues entry will be updated to mention it as well.
@hrawat wrote: maxKbps is calculated from name=thruput. Since it's missing, so maxKbps is not working/applied. Thx. Splunk is certain they will not back port the fix to 9.3.x and 9.4.x? Havin... See more...
@hrawat wrote: maxKbps is calculated from name=thruput. Since it's missing, maxKbps is not working/applied.

Thanks. Is Splunk certain they will not backport the fix to 9.3.x and 9.4.x? Having per_*_thruput *and* maxKbps broken without the workaround seems worthy of a backport. Or at the very least, the "Known Issues" entry for SPL-263518 should be updated to mention that maxKbps is not working/applied.
Hi @Cheng2Ready

It's hard to write this without seeing the full search, but having an alert fire when the result count is != 1 is very limiting. However, you might make it work with something like the search below. If there are no results found then you will struggle, so you might need to append an empty | makeresults to ensure that you have at least 1 event; then you can count the events and check the date:

index=xxx earliest=@d latest=now
| append [| makeresults ]
| stats count as event_count
| eval Date=strftime(now(),"%Y-%m-%d")
| lookup holidays.csv HolidayDate AS Date OUTPUT HolidayDate
| eval wd=strftime(now(),"%w")
| eval isWeekend=if(wd=="0" OR wd=="6",1,0)
| where isWeekend=0 AND isnull(HolidayDate) AND event_count!=2

This will return a single event IF it's not a weekend/holiday AND the event_count is not 2 - note this is 2 because we're appending a fake result in case there are zero events returned. If zero events are returned, the append still happens and results in event_count=1, which will then still fire your alert. You will need to adjust your alert to fire when the number of results is >0 (or !=0).

Does that make sense?

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
maxKbps is calculated from name=thruput. Since that metric is missing, maxKbps is not working/applied.
Hi,   I created custom app in cloud so I can migrate all alerts and dashboards from on-prem. I put everything in default as the docs advise and my metadata is like this  # Application-level permis... See more...
Hi,

I created a custom app in Splunk Cloud so I can migrate all alerts and dashboards from on-prem. I put everything in default as the docs advise, and my metadata looks like this:

# Application-level permissions
[]
access = read : [ * ], write : [ * ]
export = system

The problem is that even I, as admin, can't delete dashboards and alerts. I tried reassigning the knowledge objects to myself; nothing works. What I can find on Google is that once everything is in default it's immune to deletion, but I can't put it in local as that is also not allowed. Now I have exported my app, but there is already a local dir because users changed some data. Do I now have to move everything from local to default so I can re-upload it? And what can I do then to be able to delete alerts? We have around 7k alerts and dashboards, so it's a nightmare if I have to delete them manually from the conf files and re-upload again. Please, help!
Hi @ws

Let us know how you get on with the Python script. In the meantime, the file you want to edit is $SPLUNK_HOME/etc/log.cfg (e.g. /opt/splunk/etc/log.cfg). Look for category.<key> and change the default level (usually INFO) to DEBUG for those keys. You will need to restart Splunk. Then you should see further info in index=_internal component=<key>, which *might* help! This should be done on the forwarder picking up the logs.

Don't forget to add karma/like any posts which help.

Thanks
Will
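If you'd rather script the log.cfg change than edit it by hand, a minimal sketch is below. "TailingProcessor" and "WatchedFile" are only example category keys, not a recommendation for this particular issue; substitute the keys you actually need, keep the backup it writes, and restart Splunk afterwards.

# Sketch only: flip selected log.cfg categories to DEBUG on the forwarder.
import re
from pathlib import Path

LOG_CFG = Path("/opt/splunk/etc/log.cfg")             # adjust for your $SPLUNK_HOME
CATEGORIES = {"TailingProcessor", "WatchedFile"}      # example keys only

lines = LOG_CFG.read_text(encoding="utf-8").splitlines(keepends=True)
out = []
for line in lines:
    m = re.match(r"^category\.([^=\s]+)\s*=\s*(\w+)", line)
    if m and m.group(1) in CATEGORIES:
        out.append(f"category.{m.group(1)}=DEBUG\n")  # raise the level for this key
    else:
        out.append(line)

# Keep a backup of the original file, then write the modified version
LOG_CFG.with_name(LOG_CFG.name + ".bak").write_text("".join(lines), encoding="utf-8")
LOG_CFG.write_text("".join(out), encoding="utf-8")
print("Updated", ", ".join(sorted(CATEGORIES)), "- restart Splunk to apply")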
Hi @ganesanvc

If you do "| return FileSource RemoteHost LocalPath RemotePath" then it's going to do an AND between these fields in your main search - is this what you want? If you want an "OR" then I think you might want to do:

[| makeresults
| eval text_search="*$text_search$*"
| eval escaped=replace(text_search, "\\", "\\\\")
| eval FileSource=escaped, RemoteHost=escaped, LocalPath=escaped, RemotePath=escaped
| table FileSource RemoteHost LocalPath RemotePath
| format "(" "(" "OR" ")" "OR" ")" ]

This will create something like:

( ( FileSource="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR LocalPath="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR RemoteHost="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" OR RemotePath="\\\\Test\\\\abc\\\\test\\\\abc\\\\xxx\\\\OUT\\\\" ) )

Note - I am not 100% sure how many \\ you are expecting, but when I ran your makeresults search it failed and I had to escape the replace as:

| eval escaped=replace(text_search, "\\\\", "\\\\\\\\")

You can run the makeresults on its own and substitute your token to validate the output you get and ensure the search works correctly.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I have instrumented a Kubernetes cluster in a test environment. I have also instrumented a Java application within that cluster. Metrics are reporting for both. However, within APM, when clicking Infrastructure at the bottom of the screen, I get no data, as if I have no infrastructure configured. What configuration am I missing to correlate the data between the two?

[Screenshot: clicking Infrastructure under an instrumented APM service]
My search query:

Index=xxx <xxxxxxx>
| eval Date=strftime(_time,"%Y-%m-%d")
| lookup holidays.csv HolidayDate as Date output HolidayDate
| eval should_alert=if(isnull(HolidayDate), "Yes", "No")
| table Date should_alert
| where should_alert="Yes"

So I've been trying to create a complicated alert. Unfortunately it failed, and I'm looking for guidance. The alert is supposed to fire if there are no results OR more than 1, unless it's the day after a weekend or holiday; in other words, it should fire on 0 results OR anything other than 1. I set the following trigger condition: Number of results is not equal to 1.

The problem: when a date appears in the muted dates (holidays.csv) I want the alert suppressed, but it turned out there were 0 events that day, and those 0 results still triggered the alert, so it fired on Easter. Also, when we mute a date, does that make the search return 0 events? Technically it would then still fire due to my trigger condition. How can we make sure it stays muted for dates in the holidays.csv lookup file, and yet still alert on 0 events on dates that are not in holidays.csv?
@hrawat wrote: Note: As a side effect of this issue, maxKbps(limits.conf) will also be impacted as it requires thruput metrics to function. Can you elaborate on how maxKbps is impacted?
Does the script send to HEC or write to a file? If HEC - which endpoint?
@PickleRick The Python script establishes a connection to the Oracle database, extracts data from designated tables, and forwards the retrieved data to Splunk for ingestion.
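If the script does push to HEC, the relevant endpoint for JSON events is /services/collector/event. A minimal sketch of what that could look like is below; the URL, token, index, sourcetype, and the example rows are placeholders rather than details from the actual script being discussed, and the timestamp handling (the "time" field) is exactly where a wrong or missing value from the source table would show up as the problem described in this thread.

# Sketch only: post database rows to a Splunk HEC event endpoint.
import json
import time
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # placeholder
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder

def send_rows(rows, source="oracle_script", sourcetype="oracle:table", index="main"):
    headers = {"Authorization": f"Splunk {HEC_TOKEN}"}
    # One HEC event per row, batched into a single newline-delimited POST body
    payload = "\n".join(
        json.dumps({
            "time": time.time(),        # ideally a timestamp taken from the row itself
            "source": source,
            "sourcetype": sourcetype,
            "index": index,
            "event": row,
        })
        for row in rows
    )
    resp = requests.post(HEC_URL, headers=headers, data=payload, timeout=30, verify=True)
    resp.raise_for_status()
    return resp.json()

# Example usage with made-up rows
print(send_rows([{"ID": 1, "CREATED": "2024-05-01T10:00:00"}]))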
Wait a second. File or table? What kind of source does this data come from? Monitor input? DB Connect? Have you checked the actual data with someone responsible for the source? I mean, whether the ID (or whatever it is) in your data corresponds to the right timestamp?
@livehybrid Both servers are in the same timezone; I already compared the timezone settings on Project A and B.
Hi, I need to move all my knowledge objects, including dashboards, alerts, saved searches, lookups, etc., from the on-prem SH to the cloud SH. Please help me out with this. Can I move each app's configs to the cloud SH so that I can replicate the same knowledge objects? We are done with the data migration; now we need to move the apps and knowledge objects.