All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


"Forwarding" means exactly what is happening - you receive an event on an input, you send it to the output(s). That's it. "Mirroring" of already-existing data would be achievable only in a cluster setup. Without a cluster you can only migrate the data offline.
To clarify, my container has a restart policy of "unless-stopped", so when the container exits after the failed ansible task, docker restarts it. If you run it without that policy, it will run once, fail the ansible task, and exit.
Thanks for linking that article - I hadn't seen it, and it's got some handy tips.

1) Yes. In my limited testing, this works.

3) Unfortunately, that seems to be the tradeoff given what you're trying to do. When you filter (rest/artifact) you're looking for any artifacts which match your search criteria. When you request object detail (rest/artifact/5/name) you're restricting your results to artifact 5 specifically.

If you want to give an example of your specific flow, we can probably come up with a more detailed answer. I'm guessing you're going to want something roughly along these lines:

/rest/artifact?_filter_cef__destinationAddress={SNOW INC CI}&_filter_status="new"&page_size=0

Unfortunately, I don't think you'll be able to avoid looping through the results one way or another.
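A hedged Python sketch of that filter-then-loop flow, assuming a hypothetical SOAR base URL and a placeholder auth token; the _filter parameter names mirror the example URL above:

```python
import json
import urllib.parse
import urllib.request

SOAR_BASE = "https://soar.example.com"   # hypothetical instance URL
AUTH_TOKEN = "YOUR_TOKEN"                # placeholder ph-auth-token

def build_artifact_query(dest_ip, status="new"):
    """Query-string parameters for /rest/artifact, mirroring the example
    filter above (_filter_* values are JSON-quoted strings)."""
    return {
        "_filter_cef__destinationAddress": json.dumps(dest_ip),
        "_filter_status": json.dumps(status),
        "page_size": 0,   # page_size=0 asks SOAR for every match at once
    }

def find_artifacts(dest_ip):
    """Fetch matching artifacts and loop through the results."""
    url = (SOAR_BASE + "/rest/artifact?"
           + urllib.parse.urlencode(build_artifact_query(dest_ip)))
    req = urllib.request.Request(url, headers={"ph-auth-token": AUTH_TOKEN})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    for artifact in body.get("data", []):
        yield artifact["id"], artifact.get("name")
```

The loop at the end is the part that can't be avoided: even with page_size=0 you still iterate the `data` array to act on each artifact.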
Hi. Usually this should work when you are migrating to a new server. There are a couple of old posts on how to do this, e.g. https://community.splunk.com/t5/Installation/Upgrading-and-migrating-to-a-new-host-how-to-migrate-large/td-p/601048

I'm afraid your new server has all the same (named) indexes as the old node? Both nodes also start their bucket numbering from the beginning (0 or 1). This means you could have identically named buckets on both nodes (at least hot buckets). For that reason you couldn't rsync without losing some events, or you could end up in a situation where Splunk cannot start. See the bucket names at https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/HowSplunkstoresindexes#What_the_index_directories_look_like

In theory there are probably two or three ways to do it:

1. Create a new indexer cluster, add/migrate both nodes into it, then transfer all buckets onto one node and go back to a single node (or keep it as a one-node cluster).
2. Transfer the old indexes to the new node but change the index names, then later use both indexes in your queries and/or create eventtypes covering them.
3. Transfer the old indexes to the new node and ensure there are no bucket name collisions. You must rename those bucket directories before the transfer. You must also update the <index name>.dat file to the highest <local id>+1 before starting the node again.

It's mandatory to test these scenarios before you do this with your production nodes! Also take offline backups before the transfer, so you can roll back if/when needed. I haven't tried any of these, as I have always migrated the data to the new node before switchover. Also ensure that you have the same Splunk version on both nodes!

If I had to do this I would probably choose the 1st option.

r. Ismo
It looks like the container is continually restarting: it fails that last task, aborts, and the container restarts. Splunk UF does start up, as I see logs from the container in my lab's _internal index. This looks to have changed ~7 days ago. The image appears to be broken for tags latest, 9.4, 9.3, 9.2, etc. Tag 9.3.2 from 4 months ago works as expected. https://hub.docker.com/r/splunk/universalforwarder/tags
I think you missed the part, @kiran_panchavat, where @samuel-devops said Splunk is up and running fine. For what it's worth, I've experienced the same thing with tags latest/9.4, 9.3, and 9.2. That last task (check_for_required_restarts) fails, but everything seems to start up fine. I will point out that this is new behavior: tag 9.3.2, for example, is 4 months old and finishes its ansible "init" as expected.
Hello, I am attempting to forward data from an older indexer to a new indexer so that I can decommission the server the old indexer currently sits on. These indexers are not currently clustered, and the old is set up to forward to the new (so the indexes are all mirrored), but this was only sending new data, not any of the previously indexed data on the old. What are my options? Am I able to forcibly forward the old data to the new? Do I need to manually sync the old data and the new by passing the old buckets to the new indexer? Ideally I'd like to migrate the data over time (there's a fair amount), but in my research so far that doesn't appear feasible.
Is there a reason you're trying to manually change open_time? If you change from new to in progress, SOAR will set that field on its own. This also applies if you bump it back to new then change to in progress again.
Unfortunately, monitoring users is not an easy task. There are huge challenges with it on every system I know of - you either get too little data, making it easy to circumvent the auditing process, or you get way too much data, making it hard to make anything of it. That's why there's a relatively big market for third-party solutions - either agent-based or working as a man-in-the-middle external component. If you're OK with the limitations, you could use the PROMPT_COMMAND method (but again - I wouldn't use world-writable files), add some auditd logging to it, and you have... something. Oh, and be very careful with using the term "realtime".
You simply can't. A regex matches a pattern. The pattern is static. It can contain some "recursive" elements but you can't put something like "today's date" as part of the pattern.
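If you control the code doing the matching (rather than a static config field), the usual workaround is to interpolate today's date into the pattern before compiling, so the regex itself stays static once built. A minimal Python sketch, with an invented log format purely for illustration:

```python
import re
from datetime import date

def todays_log_pattern():
    """Build a pattern with today's date baked in at compile time.
    The regex engine still only sees a static pattern; the dynamic
    part is supplied by the surrounding code before compilation."""
    today = date.today().strftime("%Y-%m-%d")
    return re.compile(rf"^{re.escape(today)} (?P<level>\w+) (?P<msg>.*)$")

pattern = todays_log_pattern()
line = f"{date.today():%Y-%m-%d} ERROR disk full"   # sample line for today
m = pattern.match(line)
```

Note the pattern must be rebuilt when the date rolls over; a long-running process should not cache it across midnight.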
I managed to get this sorted out. With the way SOAR handles IDs, it's easier to update workbooks as a whole than to try to target specific phases or tasks. To that end, when pulling your workbooks, you'll want to get [base_url]/workbook_template?page_size=0 and [base_url]/workbook_phase_template?page_size=0. Since they're stored separately, you'll then want to stitch the workbooks and phases together using the phases' template value, which is the same as the parent workbook's ID.

Including certain fields in your push can cause SOAR to reject the changes (usually with a 404 error), so stick to the following. Workbooks (the top level): name, description, is_default, is_note_required, and phases. Phases (the middle level): name, order, and tasks. Tasks (the bottom level): name, description, order, owner, role, sla, and suggestions.

Delete - I overcomplicated this for myself. A simple REST DELETE request to [base_url]/rest/workbook_template/[ID] will delete the workbook.

Create - Post your JSON with the required fields to [base_url]/rest/workbook_template. It's important to note there is no ID.

Update - Post your full JSON with the required fields for the workbook you're changing to [base_url]/rest/workbook_template/[ID]. SOAR is intelligent enough to recognize what the changes are and update just those pieces.
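A hedged Python sketch of the push flow described above, assuming a hypothetical instance URL and a placeholder ph-auth-token; the allowed-field lists are taken straight from the post:

```python
import json
import urllib.request

SOAR_BASE = "https://soar.example.com"    # hypothetical instance URL
AUTH_TOKEN = "YOUR_TOKEN"                 # placeholder ph-auth-token

# Fields the post identifies as safe to push at each level.
WORKBOOK_FIELDS = {"name", "description", "is_default", "is_note_required", "phases"}
PHASE_FIELDS = {"name", "order", "tasks"}
TASK_FIELDS = {"name", "description", "order", "owner", "role", "sla", "suggestions"}

def strip_fields(workbook):
    """Drop everything except the allowed fields before pushing, so SOAR
    does not reject the request (typically with a 404)."""
    wb = {k: v for k, v in workbook.items() if k in WORKBOOK_FIELDS}
    wb["phases"] = [
        {**{k: v for k, v in ph.items() if k in PHASE_FIELDS},
         "tasks": [{k: v for k, v in t.items() if k in TASK_FIELDS}
                   for t in ph.get("tasks", [])]}
        for ph in workbook.get("phases", [])
    ]
    return wb

def push_workbook(workbook, workbook_id=None):
    """POST to /rest/workbook_template (create) or .../<id> (update)."""
    url = SOAR_BASE + "/rest/workbook_template"
    if workbook_id is not None:
        url += f"/{workbook_id}"      # with an ID this becomes an update
    req = urllib.request.Request(
        url, data=json.dumps(strip_fields(workbook)).encode(),
        headers={"ph-auth-token": AUTH_TOKEN}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Filtering out IDs and timestamps before the push is the key step; per the post, SOAR works out the delta itself on an update.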
Thanks Will for your prompt response. Worked with Splunk Support. Unfortunately, Splunk Support cannot add the TA to cloud because it has not been verified for cloud usage. They also had no means of identifying who the developers were to work with for vetting, and there isn't a link to a repo site for the app. We have similar thoughts to those you mention - we are investigating adding the TA to one of our forwarders. However, we are looking for a more permanent solution. The TA, when it worked on Cloud, took advantage of the HEC. The API keys were easily maintained through the TA, and it was centrally located within the HEC UI console. Having it centrally managed through cloud increased accessibility to cloud admins for support (which, in my organization's staffing model, is a larger group than the admins supporting the forwarder).
Hi @gcusello I am working on this exact query. The problem is that I do not get any results even though I have devices reporting only uninstall event code, which is 11724. The appname is being extracted correctly using the following rex:   | rex field=Message "Product: (?<appname>[^\-]+)"    Could you please help me fix the query?   index=wineventlog EventCode IN ("11724","11707") Message="*sampleapp*" | rex field=Message "Product: (?<appname>[^\-]+)" | stats latest(eval(if(EventCode="11724",_time,""))) AS uninstall latest(eval(if(EventCode="11707",_time,""))) AS install dc(EventCode) AS EventCode_count BY host appname | eval diff=install-uninstall | where (EventCode_count=1 AND EventCode="11724") OR (EventCode_count=2 AND diff<300)  
This:

/opt/log/ type = directory /opt/log/cisco_ironport_web.log file position = 207575 file size = 207575 parent = /opt/log/ percent = 100.00 type = finished reading

shows that Splunk has read this one log file 100%. This means it has sent it to the indexers (I suppose this is what's defined in your inputs.conf). Why don't you see those events? There could be several reasons:

- wrong timestamp recognition
- wrong index definition
- you have some transformations that drop them
- something else

To tell the real reason you should try to query them, e.g.

index=* earliest=1 latest=+5y

That shows whether the events have a wrong time or have gone to the wrong index. You should also check all the conf files, from the UF to the indexers and SH, to see if there is something weird.
Hi @alesyo

I think the JSON in my example shouldn't affect the outcome, as this was purely a way for me to provide a working example. You could use "fields" to list the fields you are interested in before running the foreach command:

index=notable ...etc... | fields id interestingField1 interestingField2 ..etc.. | foreach * [| eval summary=mvappend(summary,if(<<FIELD>>!="" and "<<FIELD>>"!="summary" and "<<FIELD>>"!="id", "<<FIELD>>=".<<FIELD>>,null()))] | eval summary_output="Id:".id." - ".mvjoin(summary," ") | fields summary_output

Please let me know how you get on and consider adding karma to this or any other answer if it has helped.

Regards
Will
Hi @rjastrze  As this is a Splunk Works developed app, I think the best approach would be to open a support ticket and request that this be installed on your stack, or ask them to look into having it cloud vetted. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Cooked:tcp : tcp
Raw:tcp : tcp
TailingProcessor:FileStatus :
$SPLUNK_HOME/etc/apps/sample_app/logs type = missing
$SPLUNK_HOME/etc/splunk.version file position = 70 file size = 70 percent = 100.00 type = finished reading
$SPLUNK_HOME/var/log/splunk type = directory
$SPLUNK_HOME/var/log/splunk/configuration_change.log type = directory
$SPLUNK_HOME/var/log/splunk/license_usage_summary.log type = directory
$SPLUNK_HOME/var/log/splunk/metrics.log type = directory
$SPLUNK_HOME/var/log/splunk/splunk_instrumentation_cloud.log* type = directory
$SPLUNK_HOME/var/log/splunk/splunkd.log type = directory
$SPLUNK_HOME/var/log/watchdog/watchdog.log* type = directory
$SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json type = directory
$SPLUNK_HOME/var/spool/splunk/tracker.log* type = directory
/opt/log/ type = directory
/opt/log/cisco_ironport_web.log file position = 207575 file size = 207575 parent = /opt/log/ percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/audit.log file position = 159471 file size = 159471 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = open file
/opt/splunkforwarder/var/log/splunk/btool.log file position = 192268 file size = 192268 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/conf.log file position = 9044 file size = 9044 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/configuration_change.log file position = 3353479 file size = 3353479 parent = $SPLUNK_HOME/var/log/splunk/configuration_change.log percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/first_install.log file position = 70 file size = 70 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/health.log file position = 785728 file size = 785728 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/license_usage.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/license_usage_summary.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk/license_usage_summary.log percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/mergebuckets.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/metrics.log file position = 21630761 file size = 21630761 parent = $SPLUNK_HOME/var/log/splunk/metrics.log percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/metrics.log.1 file position = 25000026 file size = 25000026 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/metrics.log.2 file position = 25000081 file size = 25000081 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/mongod.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/remote_searches.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/scheduler.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/search_messages.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/searchhistory.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/splunk_instrumentation_cloud.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk/splunk_instrumentation_cloud.log* percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/splunkd-utility.log file position = 69012 file size = 69012 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/splunkd.log file position = 12378562 file size = 12378562 parent = $SPLUNK_HOME/var/log/splunk/splunkd.log percent = 100.00 type = open file
/opt/splunkforwarder/var/log/splunk/splunkd_access.log file position = 44571 file size = 44571 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = open file
/opt/splunkforwarder/var/log/splunk/splunkd_stderr.log file position = 200 file size = 200 parent = $SPLUNK_HOME/var/log/splunk percent = 100.00 type = finished reading
/opt/splunkforwarder/var/log/splunk/splunkd_stdout.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/splunk/wlm_monitor.log file position = 0 file size = 0 parent = $SPLUNK_HOME/var/log/splunk percent = 100 type = finished reading
/opt/splunkforwarder/var/log/watchdog/watchdog.log file position = 12202 file size = 12202 parent = $SPLUNK_HOME/var/log/watchdog/watchdog.log* percent = 100.00 type = finished reading
tcp_cooked:listenerports : 8089
The current version is not available for the cloud. According to conversations with Splunk Support, the update addresses a skipped-jobs issue when the status of the Salesforce REST API tool is idle. Please update version 1.0.6 for cloud compatibility, and ensure future updates are cloud compatible.
Thank you @livehybrid - I ended up creating a ticket with Splunk Support.
@samuel-devops  Make sure nothing else is using the same ports. Check if the container is binding properly: netstat -tulnp | grep 8089 or inside the container: docker exec -it uf netstat -tulnp