All Posts

What do you mean? You want to see what queries your users run against your database server? Do you have appropriate logging set up on your database server? Do you ingest logs from that database server into your Splunk installation? If the answer to all those questions is "yes", you should have your database activity logs in your Splunk environment and should be able to search through that data.
@meshorer what version of the platform are you using? In my 6.2 instance it's no longer an issue, but I do recall it being an issue on a previous version or two.
1. appendcols just adds additional columns from the subsearch to the results of the main search, without any correlation between the result sets. It just "glues" them together in the order returned by the respective searches, so it's usually not the best idea. The command has its uses, but they are rare. 2. Running real-time searches is generally not a good idea - a real-time search ties up a CPU core on every indexer participating in the search, as well as on your search head. Real-time searches also have a lot of limitations (you can only use some of the commands in such searches).
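To illustrate point 1, here is a minimal plain-Python sketch (not SPL; the hosts and counts are made up) of why pairing rows by position alone, as appendcols does, can silently misalign data, whereas correlating on a shared field stays correct:

```python
main = [{"host": "web1", "count": 10}, {"host": "web2", "count": 7}]
# Subsearch results arrive in their own order, unrelated to the main search.
sub = [{"host": "web2", "errors": 3}, {"host": "web1", "errors": 1}]

# appendcols-style: pair rows purely by position.
glued = [{**m, **s} for m, s in zip(main, sub)]
# The first glued row now carries web2's error count next to web1's count.

# Correlating on the common field (what stats ... by <field> achieves in SPL)
# keeps each row's values together.
by_host = {s["host"]: s for s in sub}
joined = [{**m, **by_host[m["host"]]} for m in main]
```

Here `glued[0]` wrongly shows 3 errors, while `joined[0]` correctly shows web1's 1 error.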
From Splunk, can I see the queries that have been run against the database, like UPDATE, DELETE, INSERT, etc.?
No. In order to manage a cluster, a server must be the cluster master. And that's all. In rare cases (in small environments) it can fulfill other roles as well, but in the general case it shouldn't do anything else. The SH deployer, while it's theoretically also recommended to host it on a separate machine, can be combined with another role (for example a small deployment server) since it's not very busy during normal SHC operations - it just pushes a SH config bundle once in a while and that's it. Apart from that, the SHC is self-governing.

Generally speaking - indexers are the components that index data. And that's it. No other components should have the indexer role assigned to them (some people try to assign the indexer role to HFs since they process data and there is no separate role for them), and in a well-designed environment no servers other than indexers should do local indexing. Search heads are the components you run searches against. You don't need searching capability on any components other than SHs and the Monitoring Console (although you shouldn't use the MC for "production" searching, of course). In fact it's not uncommon to have the CM/SH deployer/DS configured without the web UI (but yes, theoretically speaking, you can still dispatch searches via REST calls). I don't remember whether all servers identify by default as search heads since you can run a search against them - I haven't set up a new MC for a while.

So the CM should definitely not have at least some of those roles. But just because your MC is apparently misconfigured doesn't mean your CM actually does all that. First, find out what your CM really does.
Hi, attached is a screenshot to show you what I mean. Is there a way to change the color of the text or the background in the console output? @inventsekar @phanTom
Correct, this is what is listed in the Monitoring Console as having all these roles. Our setup is as follows:

- 2 sites
- 9 search heads, clustered (5 in one site, 4 in the other)
- 8 indexers, clustered (split evenly)
- 2 heavy forwarders (site 1 only)
- 1 cluster master

I believe the documentation mentions somewhere that in order for a cluster master to orchestrate clusters, it needs to take on the role of the cluster it's trying to orchestrate, i.e. search head or indexer. It doesn't actually fulfil those roles.
Hi @tlmayes, bug and known issue are synonyms. Does Splunk Support give an indication of when the known issue will be solved? Anyway, let me know if I can help you more. Ciao. Giuseppe P.S.: Karma Points are appreciated
append is used for historical data, but my data is in real time, so please suggest an alternative.
Hello Team, as we delve into Splunk Attack Range 3.0, we're interested in understanding the MITRE ATT&CK tactics and techniques that can be simulated within this environment. If you have information on this, kindly share it with us. Thank you!
I have this query, which is working as expected. There are two different eventtypes, axs_event_txn_visa_req_parsedbody and axs_event_txn_visa_rsp_formatting, and the field common to both is F62_2:

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]") OR eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp, values(F19) as F19, values(FCO) as FCO, values(VRC) as VRC by F62_2
| where F19!=036 AND FCO=01

Now let's say I want to rewrite this query using appendcols/a subsearch: TID from axs_event_txn_visa_req_parsedbody should be passed to another query so I can get the corresponding log. For example:

Table 1
Name    Emp-id
Jayesh  12345

Table 2
Designation  Emp-id
Engineer     12345

Use Emp-id from Table 1 to get the Designation from Table 2. Similarly, TID is the common field between the two indexes; I want to fetch VRC using TID from Table 1:

index=au_axs_common_log source=*Visa* "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
| appendcols [ search index=au_axs_common_log source=*Visa* "FORMATTING:"
    | rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
    | rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
    | stats values(VRC) as VRC by TID ]
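The Emp-id example above is a keyed lookup. A minimal plain-Python sketch (not SPL; the data mirrors the hypothetical tables in the post) of joining two "tables" on their common field, the way correlating on TID would work:

```python
# "Table 1" and "Table 2" from the example above, sharing the Emp_id key.
table1 = [{"Name": "Jayesh", "Emp_id": "12345"}]
table2 = [{"Designation": "Engineer", "Emp_id": "12345"}]

# Build an Emp_id -> Designation lookup, then enrich each Table 1 row.
lookup = {row["Emp_id"]: row["Designation"] for row in table2}
result = [{**row, "Designation": lookup.get(row["Emp_id"])} for row in table1]
# result[0] -> {'Name': 'Jayesh', 'Emp_id': '12345', 'Designation': 'Engineer'}
```

This key-based correlation is what a stats-by-common-field approach gives you, and what positional appendcols does not guarantee.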
If the dropdown is populated by a search, look at the search being run and then you can determine if that user can see the data being searched. Ask the user to manually run the search. Look at any index constraints for the roles the user belongs to.  
Hello, I have a dashboard where the dropdown list is working for me, as I have Splunk admin access, whereas the same dropdown list is not populating for a user with Splunk user-level access. How do I troubleshoot this issue? Thanks
Hi, I am creating a dashboard using the Dashboard Studio template, and I previously developed a custom Splunk visualization. How can I use my own visualization in Dashboard Studio? By default, I can only choose from the visualizations that Splunk provides.
I found a shorter example to display the result set. Thank you for your efforts dtburrows3.

| bin span=10m _time
| eval minute=strftime(_time,"%M")
| where minute>54 OR minute<6
| stats count by _time
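The minute-of-hour filter above can be sketched in plain Python (not SPL; timestamps are made up): an event belongs to the x:55 to x+1:05 window when its minute is greater than 54 or less than 6.

```python
from datetime import datetime

def in_window(ts: datetime) -> bool:
    """Mirror of: where minute>54 OR minute<6"""
    return ts.minute > 54 or ts.minute < 6

print(in_window(datetime(2024, 1, 8, 8, 57)))  # True  (inside 08:55-09:05)
print(in_window(datetime(2024, 1, 8, 8, 30)))  # False (mid-hour)
```

One caveat of the SPL version: strftime returns a string, so the comparison relies on Splunk's automatic type handling of the two-digit minute.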
I also was looking for something that did this for a really long time and could never find anything. I know about CMD+SHIFT+E to expand macros in the UI, but I needed the same functionality inline in a search to use for meta-analysis (breaking SPL down into its components and analyzing it). I feel like some way of doing this exists somewhere, but I have not had much luck finding it. So I went ahead and tried making a custom command to do it, and it actually seems to work out pretty well.

I do want to note that this custom command is recursive, in the sense that it expands macros all the way down: if there are nested macros, it will expand those as well until there are no more macros to expand. So the end result should be the fully detailed SPL that is being executed. It will also replace the input args with the values it finds in the input field, so it returns the SPL that would run for that specific search with the given arguments. You can see an example of the output here (this particular example is derived from a dashboard, so input arguments are still tokenized and will be represented as such in the "expanded_spl" field):

If you are still interested in this, you can give it a try. I think it will require entries in commands.conf, searchbnf.conf, metadata/local.meta, and a custom Python script in bin/. There is also a dependency on the Splunk Python SDK. Send me a message and I can get it packed up in a custom app to share if you still need this functionality.
Hi, Does anyone out there use any archiving software to monitor, report and manage frozen bucket storage in an on-prem archive storage location? I have found https://www.conducivesi.com/splunk-archiver-vsl to fit our requirements but we are interested in looking at other options.  Thanks 
Hi, I would like to use it as an alert, but I am a bit confused about the trigger:

index=_internal group=per_index_thruput source=*metrics.log NOT series=_*
| eval last_seen=now()-_time
| stats max(last_seen) as seconds_since_seen by series
| rename series as index
| where seconds_since_seen < 120

Specifically, about the value for seconds_since_seen: if most indices are in the 800-second range, I am not sure whether a low value like 120 seconds is going to cause false positives. Any suggestions for a proper value to monitor indices would be greatly appreciated. Cheers, Paul
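The threshold arithmetic can be sketched in plain Python (not SPL; the index names and timestamps are made up). Note that for alerting on quiet indices the comparison is typically "greater than the threshold", i.e. the gap since the newest event exceeds the allowed silence:

```python
def stale_indices(last_event_times: dict, threshold_s: float, now: float) -> list:
    """Return the indexes whose newest event is more than threshold_s seconds old."""
    return [idx for idx, ts in last_event_times.items() if now - ts > threshold_s]

now = 1_700_000_000
events = {"web": now - 60, "firewall": now - 900}

print(stale_indices(events, 120, now))   # only "firewall" exceeds 120 s
```

With typical gaps around 800 seconds, a 120-second threshold would flag nearly every index on every run, which is the false-positive concern raised above; a threshold comfortably above the normal gap (and tuned per index, ideally) avoids that.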
You can try utilizing a foreach mode=multivalue loop to gather deltas between the timestamps and then do descriptive statistics around the new delta MV field. Something like this:

<base_search>
``` sort the event_time mvfield values ```
| eval event_time=mvsort(event_time)
``` initialize the field current for the next foreach loop ```
| eval current=mvindex(event_time, 0)
``` loop through each value in event_time and subtract the preceding value to get a delta ```
| foreach mode=multivalue event_time
    [ | eval tmp_delta='<<ITEM>>'-'current',
        delta=mvappend(delta, tmp_delta),
        current='<<ITEM>>' ]
``` remove these fields as they are no longer needed ```
| fields - current, tmp_delta
| eval
    ``` strip off the first entry of the delta mvfield since it will always be zero and skew the stats ```
    delta=mvindex(delta, 1, -1),
    ``` calculate the average delta between timestamps ```
    avg_delta=avg(delta),
    ``` diff is a temp field to assist with evaluating the standard deviation s=√(Σ(delta-avg_delta)^2/(n-1)) ```
    diff=mvmap(delta, 'delta'-'avg_delta'),
    diff_2=mvmap(diff, pow(diff, 2)),
    stdev_delta=sqrt(sum(diff_2)/(mvcount(diff_2)-1)),
    ``` evaluate the variance ```
    variance_delta=pow('stdev_delta', 2)
``` remove the diff fields as they were temporary helpers for the standard deviation ```
| fields - diff, diff_2
``` z-score for more detail ```
| eval zscore_delta=mvmap(delta, (('delta'-'avg_delta')/'stdev_delta'))
``` human-readable (duration) format of the deltas for context ```
| eval stdev_duration=tostring(stdev_delta, "duration"),
    range_delta=max(delta)-min(delta),
    range_duration=tostring(range_delta, "duration")
``` sort the fields list for final display ```
| fields + event_time, delta, zscore_delta, avg_delta, stdev_delta, variance_delta, range_delta, stdev_duration, range_duration

It should return a table that looks like this.
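The statistics the SPL computes inline can be checked against a plain-Python sketch (not SPL; example timestamps are made up): deltas between sorted timestamps, their mean, the sample standard deviation, and per-delta z-scores.

```python
from math import sqrt

def delta_stats(event_times: list):
    """Deltas between consecutive sorted timestamps plus descriptive stats."""
    ts = sorted(event_times)
    deltas = [b - a for a, b in zip(ts, ts[1:])]
    avg = sum(deltas) / len(deltas)
    # sample standard deviation: s = sqrt(sum((d - avg)^2) / (n - 1))
    variance = sum((d - avg) ** 2 for d in deltas) / (len(deltas) - 1)
    stdev = sqrt(variance)
    zscores = [(d - avg) / stdev for d in deltas]
    return deltas, avg, stdev, zscores

deltas, avg, stdev, z = delta_stats([0, 10, 20, 40])
# deltas = [10, 10, 20]; the 20-second gap gets the largest positive z-score.
```

Note this mirrors the SPL's choice of the sample (n-1) standard deviation; Splunk's stats stdev uses the same convention.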
@mark_groenveld Is it that you simply want a single value representing the 10-minute period from x:55 to x+1:05, so you have one row per hour? E.g.

index=main (earliest=01/08/2024:08:55:00 latest=01/08/2024:09:05:00) OR (earliest=01/08/2024:09:55:00 latest=01/08/2024:10:05:00) OR (earliest=01/08/2024:10:55:00 latest=01/08/2024:11:05:00)
| bin _time span=10m aligntime=earliest
| stats count by _time
| sort _time

If so, it's just the aligntime=earliest option in the bin command.
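What aligntime does can be sketched in plain Python (not SPL; the times are hypothetical): timestamps are floored onto a 10-minute grid whose origin is the chosen alignment time (here x:55) rather than the top of the hour.

```python
def bin_time(ts: int, span: int, aligntime: int) -> int:
    """Floor ts into a span-second bucket on a grid starting at aligntime."""
    return aligntime + ((ts - aligntime) // span) * span

SPAN = 600         # 10 minutes, in seconds
ALIGN = 55 * 60    # grid origin at 00:55 (seconds since midnight)

# 08:57 and 09:03 land in the same 08:55-09:05 bucket on the shifted grid:
print(bin_time(8*3600 + 57*60, SPAN, ALIGN) == bin_time(9*3600 + 3*60, SPAN, ALIGN))  # True
```

Without the aligntime shift, the default grid (:00, :10, :20, ...) would split the x:55 to x+1:05 window across two buckets.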