All Posts

An image is worth a thousand words https://docs.bmc.com/xwiki/bin/view/Standalone/Product-Support/productinfo/BMC-AMI-Defender-App-for-Splunk/
Finally, the app was buried a little deeper on the BMC website. The link on Splunkbase doesn't work. BMC AMI Defender App for Splunk - BMC Documentation You need to sign in on the BMC website and you need a valid subscription to access the download. Usually, your BMC admin in your organization should have that access.
Hi, does someone know where we can download the app for the BMC AMI Defender logs? Splunkbase provides a link to a BMC 404 Page Not Found error. I looked around on the BMC site when logged in, but couldn't find the app. BMC AMI Defender | Splunkbase Is there another place we can grab this app? The BMC documentation for the app is here: BMC AMI Defender App for Splunk 2.9 - BMC Documentation Thanks!
Thanks for linking that article, I hadn't seen it and it's got some handy tips.  1) Yes, this works. 3) Unfortunately, that seems to be the tradeoff based on what you're trying to do. When you filter (rest/artifact) you're looking for any artifacts which match your search results. When you request object detail (rest/artifact/5/name), you're restricting your results to artifact 5 specifically.  Based on your question, I'm guessing you're going to want something along these lines: /rest/artifact?_filter_cef__destinationAddress={SNow CI}&page_size=0 I don't think you'll be able to get away from looping through your results one way or another. 
Either I'm missing something here or you're overly complicating it. Why is there a separate json1 and json2? Sticking with your mockup data:

| makeresults count=5
| streamstats count as a
| eval _time = _time + (60*a)
| eval json1="{\"id\":1,\"attrib_A\":\"A1\"}#{\"id\":2,\"attrib_A\":\"A2\"}#{\"id\":3,\"attrib_A\":\"A3\"}#{\"id\":4,\"attrib_A\":\"A4\"}#{\"id\":5,\"attrib_A\":\"A5\"}", json2="{\"id\":2,\"attrib_B\":\"B2\"}#{\"id\":3,\"attrib_B\":\"B3\"}#{\"id\":4,\"attrib_B\":\"B4\"}#{\"id\":6,\"attrib_B\":\"B6\"}"
| makemv delim="#" json1
| makemv delim="#" json2
| eval json=mvappend(json1,json2)
| fields _time json
| mvexpand json
| spath input=json
| fields - json
| stats values(attrib_*) as * by _time id

You can of course add fillnull at the end.
This worked for me.  My pie chart was just one of a few panels in a dashboard's row.  When I would open that panel's search on its own, even fields with small values would be shown, because the pie chart then had a "row" to itself.  Moving my pie chart into its own dashboard row and making it larger then showed small values in the visualization.  Sadly, I have not found any other change to the visualization settings or search parameters/settings that would force a small pie chart to show all fields that have small values.  Quite annoying.
"Forwarding" means exactly what is happening - you receive an event on input, you send it to output(s). That's it. "Mirroring" of already existing data would be achievable only in cluster setup. Wit... See more...
"Forwarding" means exactly what is happening - you receive an event on input, you send it to output(s). That's it. "Mirroring" of already existing data would be achievable only in cluster setup. Without a cluster you can only migrate the data offline.
To clarify, my container has a restart policy of "unless-stopped", so when the container exits after the failed ansible task, docker restarts it. If you run it without that policy, it will run once, fail the ansible task, and exit.
Thanks for linking that article, I hadn't seen it and it's got some handy tips.  1) Yes. In my limited testing, this works. 3) Unfortunately, that seems to be the tradeoff based on what you're trying to do. When you filter (rest/artifact) you're looking for any artifacts which match your search results. When you request object detail (rest/artifact/5/name), you're restricting your results to artifact 5 specifically.  If you want to give an example of your specific flow, we can probably come up with a more detailed answer. I'm guessing you're going to want something roughly along these lines: /rest/artifact?_filter_cef__destinationAddress={SNOW INC CI}&_filter_status="new"&page_size=0   Unfortunately, I don't think you'll be able to avoid looping through the results one way or another.
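To make that concrete, here is a rough Python sketch of the same filter call. The base URL, token, and CI value are placeholders, and I'm assuming a plain requests script authenticating with the ph-auth-token header - adapt it to however you authenticate:

import requests

SOAR_URL = "https://soar.example.com"      # placeholder base URL
SOAR_TOKEN = "your-automation-token"       # placeholder automation token

def artifacts_for_ci(ci_address):
    """Return all artifacts whose CEF destinationAddress matches the given CI."""
    resp = requests.get(
        f"{SOAR_URL}/rest/artifact",
        params={
            "_filter_cef__destinationAddress": f'"{ci_address}"',  # string filter values quoted, as in the URL above
            "_filter_status": '"new"',
            "page_size": 0,                # 0 = return every matching record in one response
        },
        headers={"ph-auth-token": SOAR_TOKEN},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

# You still end up looping over the matches to pull out what you need.
for artifact in artifacts_for_ci("10.1.2.3"):
    print(artifact["id"], artifact.get("cef", {}).get("destinationAddress"))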
Hi, usually this is how it's done when you are migrating to a new server. There are a couple of old posts about how it can be done, e.g. https://community.splunk.com/t5/Installation/Upgrading-and-migrating-to-a-new-host-how-to-migrate-large/td-p/601048 I'm afraid that your new server has all the same (named) indexes as the old node has? Also, both have started their bucket numbering from the beginning (0 or 1). This means that you could have identically named buckets on both nodes (at least the hot buckets). For that reason you couldn't do a plain rsync without losing some events, or you end up in a situation where Splunk cannot start. See the bucket names in https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/HowSplunkstoresindexes#What_the_index_directories_look_like In theory there are probably two or three ways to do it:
1. Create a new indexer cluster, add/migrate both nodes into it, then transfer all buckets onto one node and go back to a single node (or keep it as a one-node cluster).
2. Transfer the old indexes onto the new node but change the index names, then use both indexes in your queries and/or create eventtypes for them.
3. Transfer the old indexes onto the new node and ensure that there are no bucket name collisions. You must rename those directories before the transfer. You must also update the <index name>.dat file to the highest <local id>+1 before starting the node again.
It's mandatory to test these scenarios before you do it with your production nodes! Also take offline backups before the transfer, so you can roll back if/when needed. I haven't tried any of these as I have always migrated the data onto the new node before switchover. Also ensure that you have the same Splunk version on both nodes! If I had to do this I would probably choose the 1st option. r. Ismo
It looks like the container is continually restarting. It fails that last task, aborts, and the container restarts. Splunk UF does start up, as I see logs from the container in my lab's _internal index. This looks to have changed ~7 days ago. This appears to be a broken image for tags latest, 9.4, 9.3, 9.2, etc. Tag 9.3.2 from 4 months ago works as expected. https://hub.docker.com/r/splunk/universalforwarder/tags
I think you missed the part @kiran_panchavat where @samuel-devops said splunk is up and running fine. For what it's worth, I've experienced the same thing with tags latest/9.4, 9.3, and 9.2. That last task (check_for_required_restarts) fails, but everything seems to start up fine. I will point out that this is new behavior. Tag 9.3.2, for example, is 4 months old and finishes its ansible "init" as expected.
Hello, I am attempting to forward data from an older indexer to a new indexer so that I can decommission the server the old indexer currently sits on. These indexers are not currently clustered, and the old is set up to forward to the new (so the indexes are all mirrored), but this was only sending new data, not any of the previously indexed data on the old. What are my options? Am I able to forcibly forward the old data to the new? Do I need to manually sync the old data and the new by passing the old buckets to the new indexer? Ideally I'd like to migrate the data over time (there's a fair amount), but in my research so far that doesn't appear feasible.
Is there a reason you're trying to manually change open_time? If you change from new to in progress, SOAR will set that field on its own. This also applies if you bump it back to new then change to in progress again.
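For reference, a minimal sketch of making that status change through the REST API instead (the base URL, token, and container id below are placeholders, and I'm assuming a plain POST to the container endpoint; the same transition can of course be done from the UI or a playbook):

import requests

SOAR_URL = "https://soar.example.com"      # placeholder base URL
SOAR_TOKEN = "your-automation-token"       # placeholder automation token
CONTAINER_ID = 42                          # hypothetical container id

# Move the container from new to in progress; SOAR stamps open_time itself
# on this transition, so there is no need to set the field manually.
resp = requests.post(
    f"{SOAR_URL}/rest/container/{CONTAINER_ID}",
    json={"status": "in progress"},
    headers={"ph-auth-token": SOAR_TOKEN},
    verify=False,
)
resp.raise_for_status()
print(resp.json())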
Unfortunately, monitoring users is not an easy task. There are huge challenges with it on every system I know - you either get too little data and it's easy to circumvent the auditing process, or you're getting way too much data and it's hard to make anything of it. That's why there's a relatively big market for third-party solutions - either agent-based or working as a man-in-the-middle external component. If you're ok with the limitations, you could use the PROMPT_COMMAND method (but again - I wouldn't use world-writable files), add some auditd logging to it, and you have... something. Oh, and be very careful with using the term "realtime".
You simply can't. A regex matches a pattern. The pattern is static. It can contain some "recursive" elements but you can't put something like "today's date" as part of the pattern.
I managed to get this sorted out. With the way SOAR handles IDs, it's easier to update workbooks as a whole than to try to target specific phases or tasks. To that end, when pulling your workbooks, you'll want to get [base_url]/workbook_template?page_size=0 and [base_url]/workbook_phase_template?page_size=0. Since they're stored separately, you'll then want to stitch the workbooks and phases together using the phases' template value, which is the same as the parent workbook's ID. Including other fields in your push can cause SOAR to reject the changes (usually with a 404 error); the fields to send are - Workbooks (the top level): name, description, is_default, is_note_required, phases. Phases (the middle level): name, order, and tasks. Tasks (bottom level): name, description, order, owner, role, sla, and suggestions.
Delete: I overcomplicated this for myself. A simple REST delete request to [base_url]/rest/workbook_template/[ID] will delete the workbook.
Create: Post your json with the required fields to [base_url]/rest/workbook_template. It's important to note there is no ID.
Update: Post your full json with the required fields for the workbook you're changing to [base_url]/rest/workbook_template/[ID]. SOAR is intelligent enough to recognize what the changes are and just update those pieces.
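In case it helps anyone following the same path, here is a rough Python sketch of that pull/stitch/update flow. The base URL and token are placeholders, and the field handling follows the notes above rather than a vetted schema:

import requests

BASE_URL = "https://soar.example.com"                     # placeholder SOAR instance
HEADERS = {"ph-auth-token": "your-automation-token"}      # placeholder token

def get_all(endpoint):
    """Pull every record from a template endpoint (page_size=0 returns them all)."""
    r = requests.get(f"{BASE_URL}/rest/{endpoint}",
                     params={"page_size": 0}, headers=HEADERS, verify=False)
    r.raise_for_status()
    return r.json()["data"]

# Workbooks and phases are stored separately; stitch each phase onto its parent
# workbook via the phase's "template" value (the parent workbook's ID).
workbooks = {wb["id"]: dict(wb, phases=[]) for wb in get_all("workbook_template")}
for phase in get_all("workbook_phase_template"):
    workbooks[phase["template"]]["phases"].append(phase)

def update_workbook(wb_id, payload):
    """Post the full, trimmed-down workbook json back to its ID to apply changes."""
    r = requests.post(f"{BASE_URL}/rest/workbook_template/{wb_id}",
                      json=payload, headers=HEADERS, verify=False)
    r.raise_for_status()
    return r.json()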
Thanks Will for your prompt response. We worked with Splunk Support. Unfortunately, Splunk Support cannot add the TA to Cloud because it has not been verified for Cloud usage. They also had no means of identifying who the developers were to work with for vetting, and there isn't a link to a repo site for the app. We have similar thoughts to what you mention - we are investigating adding the TA to one of our forwarders. However, we are looking for a more permanent solution.  When the TA worked on Cloud, it took advantage of the HEC. The API keys were easily maintained through the TA and were centrally located within the HEC UI console. Having it centrally managed through Cloud increased accessibility to cloud admins for support (which, in my organization's staffing model, is a larger group than the admins supporting the forwarder).
Hi @gcusello I am working on this exact query. The problem is that I do not get any results even though I have devices reporting only uninstall event code, which is 11724. The appname is being extracted correctly using the following rex:   | rex field=Message "Product: (?<appname>[^\-]+)"    Could you please help me fix the query?   index=wineventlog EventCode IN ("11724","11707") Message="*sampleapp*" | rex field=Message "Product: (?<appname>[^\-]+)" | stats latest(eval(if(EventCode="11724",_time,""))) AS uninstall latest(eval(if(EventCode="11707",_time,""))) AS install dc(EventCode) AS EventCode_count BY host appname | eval diff=install-uninstall | where (EventCode_count=1 AND EventCode="11724") OR (EventCode_count=2 AND diff<300)  
This shows

/opt/log/
type = directory
/opt/log/cisco_ironport_web.log
file position = 207575
file size = 207575
parent = /opt/log/
percent = 100.00
type = finished reading

that splunk has read this one log file 100%. This means that it has sent it to the indexers (I suppose this input has been defined in your inputs.conf). Why don't you see those events? There could be several reasons for that:
- wrong timestamp recognition
- wrong index definition
- you have some transforms that drop those events
- something else
To tell the real reason you should try to query them, e.g. index=* earliest=1 latest=+5y That shows whether those events have a wrong time or have gone to a wrong index. You should also check all conf files from the UF to the indexers and SH to see if there is something weird.