
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello, I also want to encrypt personal data in the collected data at index time and decrypt it at search time, just like human96. It seems that implementing decryption at search time can be done with a custom command, but I'm currently researching and contemplating how to encrypt a specific field at index time. Have you implemented a method to encrypt a specific field at index time? Your insights would be greatly appreciated.
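For the search-time half, a streaming custom search command built on the Splunk Python SDK (splunklib) is one way to do it. The sketch below is only illustrative - the command name, the field option, the Fernet cipher, and the key-loading helper are all placeholders, not an existing implementation, and it leaves the index-time encryption question open:

# decryptfield.py - hypothetical streaming command, e.g. ... | decryptfield field=ssn
import os
import sys

from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option
from cryptography.fernet import Fernet  # assumes the field was encrypted with a matching symmetric key


def load_key():
    # placeholder: pull the key from a proper secrets store in practice, never from source code
    return os.environ["FIELD_ENCRYPTION_KEY"].encode()


@Configuration()
class DecryptFieldCommand(StreamingCommand):
    field = Option(require=True)

    def stream(self, records):
        cipher = Fernet(load_key())
        for record in records:
            value = record.get(self.field)
            if value:
                record[self.field] = cipher.decrypt(value.encode()).decode()
            yield record


dispatch(DecryptFieldCommand, sys.argv, sys.stdin, sys.stdout, __name__)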
Hi, I am having an issue with my data ingestion. I have an XML log file that I am ingesting that is 1GB in size but is taking up to 18GB of my ingestion license. How do I correct this? I did a test converting the data to JSON, but I am still seeing mismatches between the log file size and the data being ingested into Splunk. I am comparing the daily ingestion vs the daily log size.
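For reference, one way to see exactly what Splunk is metering for that data is to query the license usage log directly. A sketch, assuming you can search _internal on the license manager; the sourcetype name is a placeholder:

index=_internal source=*license_usage.log* type="Usage" st="your_xml_sourcetype"
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024,3)

Comparing that daily figure against the size of the raw file, and checking whether the same file is being re-read (for example because it is rewritten in place or matched by more than one monitor stanza), usually narrows down where the inflation comes from.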
Perfect! Exactly what I was after. Many thanks.
Tell the stats command you want the values of field4.

|fields job_no, field2, field4
|dedup job_no, field2
|stats count, dc(field4) AS dc_field4, values(field4) as field4 by job_no
|eval calc=dc_field4 * count
What I am trying to do is graph/timechart active users. I am starting with this query:

index=anIndex sourcetype=perflogs
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| timechart distinct_count(LoginUserID) partial=false

This works, and the resulting graph appears to be correct for a 120 min window, which produces 5 min time buckets. If I shorten the time period down to 60 mins, which produces 1 min buckets, then I have a question. In the 120 min graph with 5 min buckets, at 6:40-6:45 I have 318 distinct users, but in the 90 min graph with 1 min buckets each 1 min bucket has 136, 144, 142, 131, 117 distinct users. I understand that a user can be active one minute, inactive the next minute or two, and then active again in the 4th/5th minute, which is what is happening.

My question is how to get each 1 min bin to also show the users that were active in the previous five 1 min buckets, resulting in a number that represents users that are logged in and not just active. I believe I can add minspan=5min as a kludge, but I am wondering if there is a way to do what I'm trying to show at the 1 min span. I believe what I need to do is run two queries: the first one as above, then an append that queries for events from -5min to -10min. But from what I have been trying, it either is not working or I am not doing it correctly. Basically I'm trying to find those userIDs that are active in the first time bucket (1 min) that were also active in the previous time bucket(s), then do a distinct_count(..) on the userIDs collected from both queries.
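One way to get a rolling 5 minute distinct count plotted at 1 minute granularity, instead of appending a second search, is streamstats with a time window. A sketch, reusing the same rex extraction as above:

index=anIndex sourcetype=perflogs
| rex field=_raw "^(?:[^,\n]*,){2}(?P<LoginUserID>\w+\.\w+)"
| streamstats time_window=5m dc(LoginUserID) as active_users
| timechart span=1m max(active_users) as active_users partial=false

streamstats with time_window needs the events in time order (the default descending order returned by the search is fine), and each plotted point then counts the users seen in the trailing 5 minutes rather than only those active within that single minute.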
We have a Splunk v9.1.1 cluster with a three-member SHC running on EC2 instances in AWS. In implementing disaster recovery (DR) for the SHC, I configured AWS Autoscaling to replace the search heads on failure. Unfortunately, Autoscaling does NOT re-use the IP of the failed instance on the new instance, probably due to other use cases of up- and down-scaling. So replacement instances will always have different IPs than that of the failed instance.

Starting with a healthy cluster with an elected search head captain and RAFT running, I terminated one search head. During the minute or two that it took AWS Autoscaling to replace the search head instance, RAFT stopped and there was no captain. I was then unable to add a NEW third search head to the cluster.

OK, so then I created a similar scenario, but this time had Autoscaling issue the commands to force one of the remaining two search heads to be an UN-ELECTED static captain - and then confirmed this had worked; I had two search heads, one being a captain. The Splunk documentation mentions using a static captain for DR. However, when I again tried to add the new instance as the third search head, I again received the error that RAFT was not running, there was no cluster, and therefore the member could not be added!

So what is Splunk's recommendation for disaster recovery in this situation? I understand this is a chicken-and-egg scenario, but how are you expected to recover if you can't get a third search head in place in order TO recover? It seems counter-intuitive that Splunk would disallow adding a third search head, especially with the static search head captain in place.

There are some configurable timeout parameters in server.conf in the [shclustering] stanza - would increasing any of these values keep the SHC in place long enough for Autoscaling to replace that third search head instance such that it can then join the SHC? If so, which timeouts should I use, and which values would be appropriate so that they wouldn't interfere with the day-in, day-out usage?

I'm stuck on this and haven't been able to progress any further. Any and all help is greatly appreciated!
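For reference, a sketch of the documented static-captain fallback and the member-side commands; the hostnames and the 8089 management port are placeholders, and this is not a verified recovery runbook:

# On the surviving member chosen as static captain:
splunk edit shcluster-config -mode captain -captain_uri https://sh1.example.com:8089 -election false

# On each remaining member, including the replacement instance once its shcluster configuration is initialized:
splunk edit shcluster-config -mode member -captain_uri https://sh1.example.com:8089 -election false

# Once all three members are healthy again, revert to a dynamic (elected) captain on every member:
splunk edit shcluster-config -election true -mgmt_uri https://<this_member>:8089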
Hello again splunk experts. This is my current situation:

job_no     field4
131        string1
           string2
132        string3
           string4

|table job_no, field2, field4
|dedup job_no, field2
|stats count dc(field4) AS dc_field4 by job_no
|eval calc=dc_field4 * count

produces:

job_no     field2     dc_field4     calc
131        6          2             12
132        6          2             12

This all works fine. The problem is that I also want to include the strings (string1, string2, string3, string4) in my table, like this:

job_no     field4              field2     dc_field4     calc
131        string1, string2    6          2             12
132        string3, string4    6          2             12

Any help would be greatly appreciated,
I see.  I was thinking it wasn't UF because every other instance of UF I've seen used /opt/splunkforwarder.
UBA isn't a Splunk Enterprise instance, but it does include a Splunk Universal Forwarder (UF) as part of its install (see Directories created or modified on the disk section of docs).    So, you should have a UF living at /opt/splunk for your UBA instance, and that's what you'll want to make sure is hooked up to the rest of your Splunk deployment.  Also note the Splunk platform port requirements section on that page for more info about that UF instance running alongside the UBA install.
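If you need to verify where that UF is forwarding, its outputs.conf would look something like the sketch below - the group name and indexer addresses are placeholders, and your deployment may push this via a deployment server app instead:

# /opt/splunk/etc/system/local/outputs.conf (or delivered via a deployment app)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

You can confirm what the UF is actually using with /opt/splunk/bin/splunk btool outputs list --debug, and watch /opt/splunk/var/log/splunk/splunkd.log for TcpOutputProc messages to see whether it is connecting.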
I think this should work. You shouldn't need to include the host=* or sourcetype=* as every event has a host and sourcetype.

|tstats count from datamodel={your data model} where index="es_web" AND {your data model}.action="blocked" by {your data model}.category
|sort 5 -count
Doh! That will teach me for not testing. I'm sure that dashboard worked on an earlier version of Splunk, but you're right, it keeps those pesky commas. Here's an updated version with two key changes:

1. I've set up the drilldown token to have a value when the dashboard first loads - in case you click the drilldown link before you've changed the dropdown.
2. The regex is fine - but the token seems to be treated like a multivalue field - so I've simply concatenated an empty string to it: $form.multi$ + "" - this tricks Splunk into treating it like a normal string.

<form version="1.1">
  <init>
    <set token="drilldown">form.multi=val1</set>
  </init>
  <label>Send Multi Value Token Drilldown</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="multiselect" token="multi" searchWhenChanged="true">
      <label>Multiselect</label>
      <choice value="val1">key1</choice>
      <choice value="val2">key2</choice>
      <choice value="val3">key3</choice>
      <choice value="val4">key4</choice>
      <choice value="val5">key5</choice>
      <default>val1</default>
      <prefix>(</prefix>
      <suffix>)</suffix>
      <initialValue>val1</initialValue>
      <valuePrefix>"</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter>,</delimiter>
      <change>
        <eval token="drilldown">replace($form.multi$ + "","([^,]+),?","&amp;form.multi=$1")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Token Values</title>
      <html>
        <h1>Token: $multi$</h1>
        <h1>Drilldown Token: $drilldown$</h1>
        <a href="send_multi_value_token_drilldown?$drilldown$" target="_BLANK">Drilldown</a>
      </html>
    </panel>
  </row>
</form>

I think that version should work - tested on a Splunk Cloud instance.

Cheers,
Daniel
Have you taken a look in your Monitoring Console within the Splunk UI? E.g.

$Your Splunk Instance$/app/splunk_monitoring_console/scheduler_activity_instance

You can also open up the searches for those various panels to see what data they are looking at (a lot of them are hitting REST endpoints).

(On edit: I am now realizing that you might be referring to specific SVC usage from a cloud perspective and not just "what are my scheduled searches doing?" But if this sort of metric doesn't exist for the cloud admin dashboards, maybe it should...)
Yep, I'm pretty sure. If you "overlap" the same file within two separate stanzas it will get monitored only once.
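For illustration, an overlapping pair like this (hypothetical paths) still results in /var/log/app/app.log being read only once:

# inputs.conf - both stanzas match the same file
[monitor:///var/log/app]
index = main

[monitor:///var/log/app/app.log]
index = main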
1. If I remember correctly, you can do a delta report in Nessus.

2. Instead of bending over backwards comparing summarized stats, approach it from the other side: categorize your events first, then do a summary. I'm not 100% sure about your data and your desired outcome, but I would probably try to approach it like this:

index=nessus Risk=Medium earliest=-9d
| eval state=if(_time<now()-7*86400,"OLD","NEW")
| stats values(state) as state by CVE extracted_Host

This will give you a summary with a field (possibly multivalued) telling you whether it was in the old scan, the new one, or both. Now you can decide what the final status is depending on the state of the CVE in the old scan and the new one:

| eval status=case(state="OLD" AND state="NEW","still open",state="OLD","closed",state="NEW","Yummy, a fresh one!")
Maybe the tagging is not done in the right way. Where do we need to check further?
Ahhh... so it's Windows. OK, first check if your port is open. You can do that with the netstat command from cmd or PowerShell; I don't remember the right switches to list listening ports on Windows, though. Anyway, if the port was closed, you should get a connection refused error, not a timeout.
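For example (9997 below is only an assumption for the default Splunk receiving port - substitute whatever port you actually configured):

:: cmd - list listening ports, optionally filtered to the one you care about
netstat -ano | findstr LISTENING
netstat -ano | findstr :9997

# PowerShell - same check, plus a reachability test from the sending host
Get-NetTCPConnection -State Listen
Test-NetConnection -ComputerName splunk-indexer.example.com -Port 9997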
Hi @Adiaobong.Odungide, Sorry for the late reply, this is what I heard back from the Docs team. The `key` is whatever you want to use to identify the custom data; the `value` is the data you want to capture and attach to the snapshot. The call to `txn.addSnapshotData()` can occur anywhere in your application where a transaction is in progress, either programmatically created via `appd.startTransaction(...)` or auto-discovered and retrieved via `appd.getTransaction(request)`. To add custom snapshot data to a transaction that's already being detected and reported, `getTransaction()` would be the required approach.
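As a concrete illustration, a minimal sketch of what that might look like in an Express-style handler - only getTransaction() and addSnapshotData() come from the answer above; the route, the key, and the value used are made up:

var appd = require("appdynamics");
// appd.profile({...}) is assumed to have been called at startup, before other requires
var express = require("express");
var app = express();

app.get("/checkout", function (request, response) {
  var txn = appd.getTransaction(request); // auto-discovered business transaction
  if (txn) {
    // key identifies the custom data; value is what gets attached to the snapshot
    txn.addSnapshotData("cartValue", "123.45");
  }
  response.send("ok");
});

app.listen(3000);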
Hi Deitrich,

The first "host" after the "displayName" option defines the vCenter's IP address or hostname. For example, if your vCenter's service URL is https://vcenter.onotio.com, you need to define it like this below.

If you want to monitor all VM hosts in this vCenter (in your example you mention approx. 200 VM hosts) and all VMs on those 200 VM hosts, your hostConfig section must be like the one below. This means getting all VM hosts' metrics and all VMs' metrics under those 200 VM hosts. But with this configuration, based on your VM and VM host counts, you need to increase the "numberofThreads" count (for example 10-15 or 20) and the machine agent's max metrics value (the default value is 450, which is too low for this kind of scenario).

For 10000 metrics you need to install a machine agent with the Java option below:

-Dappdynamics.agent.maxMetrics=10000

To sum up, the example config yml gathers metrics from vcenter.onotio.com for 3 different VM hosts:

10.1.31.12
10.1.1.5
10.1.15.7

and also vm1.onotio.intra's VM metrics, vm2.onotio.infra's VM metrics, and all VMs' metrics from the 10.1.15.7 VM host.

If you need further help, please feel free to ask.

Thanks,
Cansel
>>> Basically, this is a question: able to see events till 4:00 am and after that not able to see.

Hi @Praz_123 ... you were able to see logs/events till 4am and then not able to see them (for the host with IP 161.209.202.108 ... next time please avoid including IP addresses in your post, for security concerns). Maybe there are no events/logs after 4am at all, so you should check with the team or person who creates those events/logs (at the required host).

If you are looking for more details, please update us with more info. Thanks.
Hi @Deepak.Paste, What kind of help do you need? Could you please give me some details so I can help you? Thanks, Cansel