All Posts


We have a Splunk v9.1.1 cluster with a three-search-head SHC running on EC2 instances in AWS. In implementing disaster recovery (DR) for the SHC, I configured AWS Auto Scaling to replace the search heads on failure. Unfortunately, Auto Scaling does NOT re-use the IP of the failed instance on the new instance, probably because of other up- and down-scaling use cases, so replacement instances will always have new/different IPs than the failed instance.

Starting with a healthy cluster with an elected search head captain and RAFT running, I terminated one search head. During the minute or two it took AWS Auto Scaling to replace the search head instance, RAFT stopped and there was no captain. I was then unable to add a NEW third search head to the cluster.

OK, so then I created a similar scenario, but this time had Auto Scaling issue the commands to force one of the remaining two search heads to be an UN-ELECTED static captain - and then confirmed this had worked: I had two search heads, one being a captain. The Splunk documentation mentions using a static captain for DR. However, when I again tried to add the new instance as the third search head, I again received the error that RAFT was not running, there was no cluster, and therefore the member could not be added!

So what is Splunk's recommendation for disaster recovery in this situation? I understand this is a chicken-and-egg scenario, but how are you expected to recover if you can't get a third search head in place in order TO recover? It seems counter-intuitive that Splunk would disallow adding a third search head, especially with the static captain in place.

There are some configurable timeout parameters in server.conf in the [shclustering] stanza - would increasing any of these values keep the SHC in place long enough for Auto Scaling to replace that third search head instance so that it can then join the SHC? If so, which timeouts should I use, and which values would be appropriate so that they wouldn't interfere with day-in, day-out usage? I'm stuck on this and haven't been able to progress any further. Any and all help is greatly appreciated!
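For reference, the relevant knobs live in the [shclustering] stanza of server.conf on each member. The sketch below is illustrative only - the values are assumptions to experiment with in a test environment, not recommendations:

```ini
# server.conf on each SHC member - ILLUSTRATIVE values, not recommendations
[shclustering]
# Time members wait before triggering a new captain election (default 60000 ms).
# Raising it may keep the cluster tolerant while Auto Scaling replaces a member.
election_timeout_ms = 180000

# Member-to-member network timeouts, in seconds (defaults are lower).
cxn_timeout = 120
send_timeout = 120
rcv_timeout = 120
```

Whether longer timeouts actually bridge the replacement window depends on how fast Auto Scaling brings the new instance up, so this is worth validating in a lab before touching production.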
Hello again Splunk experts. This is my current situation:

job_no    field4
131       string1
          string2
132       string3
          string4

|table job_no, field2, field4
|dedup job_no, field2
|stats count dc(field4) AS dc_field4 by job_no
|eval calc=dc_field4 * count

produces:

job_no    field2    dc_field4    calc
131       6         2            12
132       6         2            12

This all works fine. The problem is that I also want to include the strings (string1, string2, string3, string4) in my table, like this:

job_no    field4              field2    dc_field4    calc
131       string1, string2    6         2            12
132       string3, string4    6         2            12

Any help would be greatly appreciated,
I see.  I was thinking it wasn't UF because every other instance of UF I've seen used /opt/splunkforwarder.
UBA isn't a Splunk Enterprise instance, but it does include a Splunk Universal Forwarder (UF) as part of its install (see Directories created or modified on the disk section of docs).    So, you should have a UF living at /opt/splunk for your UBA instance, and that's what you'll want to make sure is hooked up to the rest of your Splunk deployment.  Also note the Splunk platform port requirements section on that page for more info about that UF instance running alongside the UBA install.
I think this should work. You shouldn't need to include host=* or sourcetype=*, as every event has a host and sourcetype.

|tstats count from datamodel={your data model}
    where index="es_web" AND {your data model}.action="blocked"
    by {your data model}.category
|sort 5 -count
Doh! That will teach me for not testing. I'm sure that dashboard worked on an earlier version of Splunk, but you're right, it keeps those pesky commas. Here's an updated version with two key changes:

1. I've set up the drilldown token to have a value when the dashboard first loads - in case you click the drilldown link before you've changed the dropdown.
2. The regex is fine - but the token seems to be treated like a multivalue field - so I've simply concatenated an empty string to it: $form.multi$ + "" - this tricks Splunk into treating it like a normal string.

<form version="1.1">
  <init>
    <set token="drilldown">form.multi=val1</set>
  </init>
  <label>Send Multi Value Token Drilldown</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="multiselect" token="multi" searchWhenChanged="true">
      <label>Multiselect</label>
      <choice value="val1">key1</choice>
      <choice value="val2">key2</choice>
      <choice value="val3">key3</choice>
      <choice value="val4">key4</choice>
      <choice value="val5">key5</choice>
      <default>val1</default>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <initialValue>val1</initialValue>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>,</delimiter>
      <change>
        <eval token="drilldown">replace($form.multi$ + "","([^,]+),?","&amp;form.multi=$1")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Token Values</title>
      <html>
        <h1>Token: $multi$</h1>
        <h1>Drilldown Token: $drilldown$</h1>
        <a href="send_multi_value_token_drilldown?$drilldown$" target="_BLANK">Drilldown</a>
      </html>
    </panel>
  </row>
</form>

I think that version should work - tested on a Splunk Cloud instance. Cheers, Daniel
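As a sanity check of the replace() trick in that eval, the same rewrite can be sketched outside Splunk with Python's re.sub (the sample token value is an assumption mirroring the multiselect's quoted, comma-delimited output):

```python
import re

# Simulate the multiselect token value Splunk produces:
# quoted values joined by the configured "," delimiter.
token = '"val1","val2","val3"'

# Same pattern as the dashboard's eval: capture each comma-separated
# chunk and re-emit it as an &form.multi=... URL fragment.
drilldown = re.sub(r'([^,]+),?', r'&form.multi=\1', token)

print(drilldown)  # &form.multi="val1"&form.multi="val2"&form.multi="val3"
```

The pesky commas disappear because the optional trailing comma is consumed by the match but not re-emitted in the replacement.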
Have you taken a look in your Monitoring Console within the Splunk UI? E.g.:

$Your Splunk Instance$/app/splunk_monitoring_console/scheduler_activity_instance

You can also open up the searches for those various panels to see what data they are looking at (a lot of them are hitting REST endpoints).

(On edit: I am now realizing that you might be referring to specific SVC usage from a cloud perspective and not just "what are my scheduled searches doing?" But if this sort of metric doesn't exist for the cloud admin dashboards, maybe it should...)
Yep, I'm pretty sure. If you "overlap" the same file within two separate stanzas it will get monitored only once.
1. If I remember correctly, you can do a delta report in Nessus.

2. Instead of bending over backwards comparing summarized stats, approach it from the other side: categorize your events first, then do a summary. I'm not 100% sure about your data and your desired outcome, but I would probably try to approach it like this:

index=nessus Risk=Medium earliest=-9d
| eval state=if(_time<now()-7*86400,"OLD","NEW")
| stats values(state) as state by CVE extracted_Host

This will give you a summary with a field (possibly multivalued) telling you whether each finding was in the old scan, the new one, or both. Now you can decide what the final status is depending on the state of the CVE in the old scan and the new one:

| eval status=case(state="OLD" AND state="NEW","still open",state="OLD","closed",state="NEW","Yummy, a fresh one!")
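The categorize-then-summarize idea above can be sketched in plain Python (the field names and sample findings are assumptions, mirroring the SPL's OLD/NEW split at seven days):

```python
import time

DAY = 86400
now = time.time()

# Hypothetical scan findings: (timestamp, CVE, host).
events = [
    (now - 8 * DAY, "CVE-2024-0001", "hostA"),  # only in the old scan
    (now - 8 * DAY, "CVE-2024-0002", "hostA"),  # in both scans
    (now - 1 * DAY, "CVE-2024-0002", "hostA"),
    (now - 1 * DAY, "CVE-2024-0003", "hostB"),  # only in the new scan
]

# eval state=if(_time < now()-7*86400, "OLD", "NEW")
# stats values(state) as state by CVE, host
states = {}
for ts, cve, host in events:
    state = "OLD" if ts < now - 7 * DAY else "NEW"
    states.setdefault((cve, host), set()).add(state)

# eval status=case(...) - decide the final status per (CVE, host)
for key, s in sorted(states.items()):
    if s == {"OLD", "NEW"}:
        status = "still open"
    elif s == {"OLD"}:
        status = "closed"
    else:
        status = "new finding"
    print(key, status)
```

The key point is that the grouping step collapses both scans into one row per (CVE, host), so the comparison becomes a simple per-row case statement rather than a join between two summaries.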
Maybe the tagging is not done in the right way. Where do we need to check further?
Ahhh... so it's Windows. OK, first check whether your port is open. You can do that with the netstat command from cmd or PowerShell; I don't remember the right switches to list listening ports on Windows, though. Anyway, if the port were closed, you should get a connection-refused error, not a timeout.
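If netstat is awkward, a quick way to tell a closed port (immediate refusal) from a filtered one (timeout) is a small socket probe - a sketch, with the host and port below as placeholder assumptions:

```python
import socket

def probe_port(host: str, port: int, timeout: float = 3.0) -> str:
    """Return 'open', 'closed' (connection refused), or 'timeout' (likely filtered)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"
    except socket.timeout:
        return "timeout"
    finally:
        s.close()

# Example (hypothetical forwarder receiving port):
print(probe_port("127.0.0.1", 9997))
```

A "timeout" result usually points at a firewall silently dropping packets somewhere in the path, which matches the symptom described above.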
Hi @Adiaobong.Odungide, Sorry for the late reply, this is what I heard back from the Docs team. The `key` is whatever you want to use to identify the custom data; the `value` is the data you want to capture and attach to the snapshot. The call to `txn.addSnapshotData()` can occur anywhere in your application where a transaction is in progress, either programmatically created via `appd.startTransaction(...)` or auto-discovered and retrieved via `appd.getTransaction(request)`. To add custom snapshot data to a transaction that's already being detected and reported, `getTransaction()` would be the required approach.
Hi Deitrich,

The first "host" after the "displayName" option defines the vCenter's IP address. For example, if your vCenter's service URL is https://vcenter.onotio.com, that is the host you need to define.

[The original post included configuration screenshots here that are not reproduced.]

If you want to monitor all VM hosts in this vCenter (in your example you mention approx. 200 VM hosts) and all VMs on those 200 VM hosts, your hostConfig section must cover all of them. This means getting all VM hosts' metrics and all VMs' metrics under those 200 VM hosts. With this configuration, based on your VM and VM host counts, you need to increase the "numberofThreads" count (for example 10, 15, or 20) and the machine agent's max metrics value (the default of 450 is far too low for this kind of scenario).

For 10000 metrics you need to run the machine agent with the Java option below:

-Dappdynamics.agent.maxMetrics=10000

To sum up, the example config gathers metrics from vcenter.onotio.com for 3 different VM hosts - 10.1.31.12, 10.1.1.5, and 10.1.15.7 - and also vm1.onotio.intra's VM metrics, vm2.onotio.infra's VM metrics, and all VMs' metrics from the 10.1.15.7 VM host.

If you need further help please feel free to ask.

Thanks,
Cansel
>>> Basically, this is a question: able to see events till 4:00 am, and after that not able to see them.

Hi @Praz_123... you were able to see logs/events till 4am and then not able to see them, for the host with IP 161.209.202.108 (next time please avoid putting IP addresses in your post, for security reasons). Maybe there are no events/logs after 4am at all, so you should check with the team or person who creates those events/logs on the required host.

If you are looking for more details, please update us with more info. Thanks.
Hi @Deepak.Paste, What kind of help do you need? Can you please give me some details to help you? Thanks Cansel
We are also facing the same issue and had to revert back to 2022.5, and we haven't found any updates on Alamofire compatibility in the past months either.
Hello bowesmana,

The transaction command worked. Memory was at 16% when the search started, and the search ran for 72 hours with the transaction command, but memory utilization stayed at 16% every time we checked. So the transaction command doesn't have the huge memory-requirements issue that the dedup command has.

The overall count needed was all the events in that 24-hour period, and then all the events in that same 24-hour period minus exact duplicate events. As mentioned, we were able to get the count of all the events, minus the duplicates, using the transaction command. So all is good.

The overall reason for this post was to find out whether the dedup command possibly had a defect in SE 9.5.0, and you answered that the dedup command is designed that way. Although that means the dedup command is basically useless with larger data sets.
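For what it's worth, the two counts the post describes (all events vs. events minus exact duplicates) can be sketched with a set of event tuples in Python - the sample events are assumptions:

```python
# Count all events vs. events minus exact duplicates,
# analogous to deduplicating on every field of the event.
events = [
    ("10:00:00", "hostA", "service started"),
    ("10:00:00", "hostA", "service started"),   # exact duplicate
    ("10:05:00", "hostB", "service stopped"),
]

total = len(events)       # all events in the window
unique = len(set(events)) # exact duplicates collapsed

print(total, unique)  # 3 2
```

A set-based dedup like this only needs memory proportional to the number of distinct events, which is roughly the trade-off that makes transaction-style grouping viable where an in-memory dedup over every field struggles.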
Thanks for your reply. I added this eval statement into the search, but the result is different. It is supposed to combine the different LOBs' results into one result, but the max value of the blue column at Oct 10 is a lot less than the green one (33) in the previous screenshot. The green column's values should be included in the blue column now, so the max should be the same. Not sure why the result is different now.
Have you tried something like this (assuming ServiceDown is a string)?

index=foo (trap=ServiceDown OR trap=Good) earliest=-6m
| dedup ```add a field that contains device name```
| where (trap="ServiceDown" AND _time <= relative_time(now(), "-5m"))
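The dedup-then-filter idea above can be sketched in Python: keep only the most recent trap per device, then alert on devices whose latest trap is still ServiceDown and is at least 5 minutes old. The device names and timestamps below are assumptions:

```python
import time

now = time.time()

# Hypothetical SNMP trap events: (timestamp, device, trap).
events = [
    (now - 360, "router1", "ServiceDown"),  # down 6 min ago, never recovered
    (now - 300, "router2", "ServiceDown"),
    (now - 60,  "router2", "Good"),         # recovered since then
    (now - 30,  "router3", "ServiceDown"),  # too recent to alert on yet
]

# dedup by device: keep only the most recent event per device
# (sorting ascending means the last write per device wins).
latest = {}
for ts, device, trap in sorted(events):
    latest[device] = (ts, trap)

# where trap="ServiceDown" AND _time <= relative_time(now(), "-5m")
alerts = [
    device
    for device, (ts, trap) in latest.items()
    if trap == "ServiceDown" and ts <= now - 300
]

print(sorted(alerts))  # ['router1']
```

router2 drops out because its latest event is Good, and router3 drops out because its outage is too recent - which is exactly the "down and not yet recovered for 5 minutes" condition the SPL is after.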
I don't know why you're not seeing the sourcetype field.  Every event should have that field.