All Posts


Unfortunately this doesn't help in this scenario, as the issue is Data Model Wrangler seeing the shared knowledge objects of other apps, not the visibility of Data Model Wrangler's own shared knowledge objects.
In my raw data I have a portion that I would like to use in a report:

"changes":{"description":{"before":"<some text or empty>","after":"<some text or empty>"}}

I created a rex:

rex summary= "changes":\{"description":\{"before":"<some text or empty>","after":"<some text or empty>"\}\})"

But it doesn't work. Please advise.
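For what it's worth, a minimal sketch of one way to extract those values, assuming the event is valid JSON (the output field names are just illustrative):

| spath path=changes.description.before output=description_before
| spath path=changes.description.after output=description_after

spath walks the JSON structure directly, which avoids having to escape all the quotes and braces that a rex approach needs.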
The strptime function converts a timestamp from text format into integer (epoch) format. To convert from one text format into another, use a combination of strptime and strftime (which converts epochs into text).

| eval latest_time = strftime(strptime(latest_time, "%Y-%m-%dT%H:%M:%S.%3N%Z"), "%Y-%m-%d %H:%M:%S.%3N%Z")

Or you could use sed to replace the "T" with a space.

| rex mode=sed field=latest_time "s/(\d)T(\d)/\1 \2/"
Yes, I suspected that would happen. Maybe try:
1. Stop Splunk if you can.
2. Back up the /opt/splunk/etc/apps folder (so you have your app configs at least).
3. For your data, if you are using the default $SPLUNK_HOME/var/lib/splunk folder, it can be moved to a temp folder as well; if you had a separate volume, even better - it won't get touched.
4. Re-install Splunk over the current broken install and see if that works (I suspect not, but worth a go).
5. If it works, restore the /opt/splunk/etc/apps folder and your data (make sure you set the Splunk permissions - chown -R splunk:splunk - on the Splunk and data folders).
If that all fails, then maybe wipe it clean and start again. Keeping it as it is does not bode well for the future: you will have other upgrades to do and it will always cause some kind of problem, so better to sort it all out now and make it clean. If it was me, I would start clean again - fewer issues in the longer run.
Warm and cold buckets can be copied safely while Splunk is running.
Find your knowledge object and its owner - look at the example below and change it to your requirements. Example:

curl -k -u admin_user:password https://<MY_CLOUD_STACK>.splunkcloud.com:8089/servicesNS/nobody/YOUR_APP/saved/searches/my_search/acl -d 'owner=new_user' -d 'sharing=global' -X POST

Here's some further help on ACLs in Splunk Cloud: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/RESTTUT/RESTbasicexamples
In my index I don't see all the logs being forwarded by the Splunk UF. How can I monitor when events are dropped from the event queue on the Splunk UF? Can I monitor this from the Splunk deployment server?
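One way to check for queue pressure on a forwarder - a minimal sketch, assuming the UF is sending its _internal logs as it does by default (<your_uf_host> is a placeholder):

index=_internal host=<your_uf_host> source=*metrics.log* group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart max(pct_full) by name

Queues that sit near 100% full suggest events are being blocked or dropped. Note this search runs on your indexers/search head, not the deployment server, which only distributes configuration.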
We are in the midst of a migration from physical servers to virtual servers, and we wonder if stopping Splunk is mandatory in order to perform the cold data migration, or if there's a workaround so this can be done safely without stopping Splunk.
Can we bump this? I'm running into the same issue.
a) Double-check in a browser the actual link you need.
b) Hard-code a link into the same place in the dashboard - does that work?
c) Print out/display the actual $LinkToken$ that you are appending to the string - what does it look like?
d) Compare your manual "I just checked it in a browser" link with the one being generated in Splunk; they will differ somewhere - where are they different?
I suspect the above steps will help you find where it's going wrong.
Happy Splunking, Rich
I get the feeling you've somehow overflowed one or both of your counts. Why not split it out temporarily into three pieces - one being "$MA:result.macoscount$", another being "$COSMOS:result.cosmacount$", then finally the subtraction. If nothing else it'll help narrow down what's going on!
Allow me to try to restate what it is you have said - please correct as appropriate! You have syslog coming into Splunk. You would like to forward these events to another syslog system, in addition to ingesting them into Splunk. So devices send syslog to a Splunk heavy forwarder instance, and you'd like that HF to send those incoming syslogs both to Splunk (as cooked data) and to yet another syslog instance (as syslog). Hopefully that sounds like what you are doing.

Some questions then:
1) How is the HF receiving syslog? Directly with the Splunk syslog input, or via some "regular syslog app" on the system?

Also, this seems like a lot of work and re-work. Why can't you just send syslog from the source devices to two separate entities? And even if you can't, hopefully the answer to the above question is that you are using syslog-ng (which I'm positive can duplicate syslog as it comes in), so you can break this problem into two pieces: one of receiving syslog and forwarding it, and another of Splunk reading the files the syslog server creates.
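For reference, if the HF does end up having to duplicate the stream itself, Splunk can route incoming data to a third-party syslog server alongside normal forwarding via the _SYSLOG_ROUTING mechanism. A minimal sketch with placeholder names:

outputs.conf (on the HF):
[syslog:my_syslog_group]
server = <syslog_host>:514

transforms.conf:
[route_all_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

props.conf:
[<your_sourcetype>]
TRANSFORMS-routesyslog = route_all_to_syslog

REGEX = . matches every event of that sourcetype, so each one also gets a copy sent to the syslog output group.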
Hello, I want to use SOAR with Splunk Enterprise, the two working together so that I do not have to buy Splunk ES, and I want the process to be automatic: I take data from Splunk Enterprise to the SOAR, and the SOAR performs the action processes. How is this done? Note: I was using Splunk ES, but the process is cumbersome.
I'm working with a field named Match_Details.match.properties.user. It contains domain\user information that I'm trying to split into domain and user. I can't use EXTRACT in props.conf because of this restriction:

EXTRACT-<class> = [<regex>|<regex> in <src_field>]
NOTE: <src_field> has the following restrictions:
* It can only contain alphanumeric characters and underscore (a-z, A-Z, 0-9, and _).

Is this also true with REPORT in transforms.conf? I can't find any documentation that tells me. TIA, Joe
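Not an answer to the REPORT question itself, but a search-time workaround sketch: eval can reference field names containing dots if you single-quote them, so the domain\user split can be done inline (assuming the value really is domain\user):

| eval domain = mvindex(split('Match_Details.match.properties.user', "\\"), 0)
| eval user = mvindex(split('Match_Details.match.properties.user', "\\"), 1)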
@strive, @th1agarajan - My requirement is similar to this, but I don't want the daily peak hour. I just need to get the peak hour for a whole time range. Let's say I am searching over the last 7 days of data; it needs to report only one peak hour out of all the hours (out of 24*7). How can I achieve this?
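A minimal sketch of one approach, assuming a plain event count is what defines "peak" (swap in your own base search):

index=your_index
| bin _time span=1h
| stats count by _time
| sort - count
| head 1

This buckets the entire search range into hours, counts events per hour, and keeps only the single busiest hour - for a 7-day range, one hour out of the 24*7.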
Still working on this one. I uninstalled SplunkForwarder from the Splunk Enterprise server, and that seems to have been a BAD move. It seems to have caused some config and permissions changes, and Splunk Enterprise segfaults when I try to start it now. Still trying to work out a fix.
Hi @WILLIAM.GREENE, Your post has received two replies recently. Has either one of them helped? If so, can you click the "Accept as Solution" button? If not, please reply and continue the conversation. 
I want to get an alert when there is a switch between events for the first time. Below is an example:

index=abc sourcetype=xyz <warning>
index=abc sourcetype=xyz <critical>

These are the 2 queries I have, and I want an alert when there is a switch from <warning> to <critical>. Please help with the query.
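A minimal sketch of one approach, assuming each event's severity can be derived from its text (the "warning"/"critical" terms below stand in for however your <warning> and <critical> events are actually identified):

index=abc sourcetype=xyz ("warning" OR "critical")
| eval state = if(searchmatch("critical"), "critical", "warning")
| sort 0 _time
| streamstats current=f window=1 last(state) as prev_state
| where prev_state="warning" AND state="critical"

Any result row is a critical event that immediately followed a warning event, which the alert can trigger on.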
Hi @Pablo.Jaña, It seems the community has not jumped in to help. Did you happen to find a solution yourself you can share? If not, you can try contacting AppDynamics Support: How do I submit a Support ticket? An FAQ 
We have the "Reassign Knowledge Objects" option via the Splunk Cloud portal settings, but is it possible to do it via the API? We need to do this for all KOs owned by a specific user.
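A rough sketch of how that might look with the REST API, using saved searches as the example object type and placeholder names throughout (the search= filter on owner is an assumption - you may need to filter the list client-side instead):

curl -k -u admin_user:password "https://<MY_CLOUD_STACK>.splunkcloud.com:8089/servicesNS/-/-/saved/searches?search=eai:acl.owner%3Dold_user&output_mode=json"

Then, for each object returned, POST a new owner to its acl endpoint:

curl -k -u admin_user:password -X POST "https://<MY_CLOUD_STACK>.splunkcloud.com:8089/servicesNS/nobody/<app>/saved/searches/<name>/acl" -d owner=new_user -d sharing=app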