All Posts


Hello everyone, I want to filter data for a specific keyword "Snapshot created successfully" from a log file, but I am getting other events as well, along with the searched keyword. My entries in props.conf and transforms.conf are as below:

props.conf

[sourcetype]
TRANSFORMS-filter = stanza

transforms.conf

[stanza]
REGEX = "Snapshot created successfully"
DEST_KEY = queue
FORMAT = indexqueue

Is there any issue here?
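For reference, the route-and-filter pattern in the Splunk docs keeps only matching events with two transforms: one that first sends everything to nullQueue, and one that routes the matches back to indexqueue. A minimal sketch (the stanza names are illustrative; note the REGEX carries no surrounding quotes, since transforms.conf would treat them as literal characters):

props.conf

[sourcetype]
TRANSFORMS-filter = setnull, keep_snapshot

transforms.conf

# send every event to the null queue first
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# then route the events we want back to the index queue
[keep_snapshot]
REGEX = Snapshot created successfully
DEST_KEY = queue
FORMAT = indexqueue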
It is somewhat confusing what that mvexpand is supposed to do and why string merge is necessary. As I last commented in your other post, there is nothing wrong with Splunk's left join. Even though I want to avoid join in general, join is better than doing all that extra work. Here is my emulation:

| makeresults format=csv data="ip_address, host
10.1.1.1, host1
10.1.1.2, host2
10.1.1.3, host3
10.1.1.4, host4
10.1.1.5, host5
10.1.1.6, host6
10.1.1.7, host7"
| rename ip_address as ip
| join max=0 type=left ip
    [makeresults format=csv data="ip, risk, score, contact
10.1.1.1, riskA, 6, ,
10.1.1.1, riskB, 7 ,
10.1.1.1, ,, person1,
10.1.1.1, riskC, 6,,
10.1.1.2, ,, person2,
10.1.1.3, riskA, 6, person3,
10.1.1.3, riskE, 7, person3,
10.1.1.4, riskF, 8, person4,
10.1.1.8, riskA, 6, person8,
10.1.1.9, riskB, 7, person9"]
| table ip, host, risk, score, contact

The output is:

ip        host   risk   score  contact
10.1.1.1  host1  riskA  6
10.1.1.1  host1  riskB  7
10.1.1.1  host1                person1
10.1.1.1  host1  riskC  6
10.1.1.2  host2                person2
10.1.1.3  host3  riskA  6      person3
10.1.1.3  host3  riskE  7      person3
10.1.1.4  host4  riskF  8      person4
10.1.1.5  host5
10.1.1.6  host6
10.1.1.7  host7

Hope this helps. (And thanks for posting data emulation. That makes things easier.)
For example

<html>
  <button data-token-json='{"my_token":"My Value"}'>Set the my_token token to My Value</button>
</html>

and you can then use the $my_token$ token elsewhere in your dashboard.
It's a really useful piece of JS that lets you put HTML buttons into your dashboard to set and unset the tokens the dashboard uses.
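For instance, once the button has set it, $my_token$ can drive a panel like this minimal Simple XML sketch (the search is just a stand-in):

<dashboard>
  <row>
    <panel>
      <title>my_token is: $my_token$</title>
      <table>
        <search>
          <query>index=_internal component="$my_token$" | stats count</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>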
Hi @elend, your two searches are completely different, so it's normal to have different results. Probably some of the additional fields that you used in the second search have empty values, so the related results are discarded from the second search's output. In other words, you cannot compare these two searches. To really compare them, you should modify the DataModel rules, adding a calculated field that substitutes a fixed value (e.g. "unknown") whenever a field is empty, as you can find for the user field in the Authentication data model. Ciao. Giuseppe
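Such a calculated field could be sketched as a one-line EVAL in props.conf (user is just the example field from the Authentication model; adapt it per field):

EVAL-user = if(isnull(user) OR user="", "unknown", user)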
Hi @dbroggy, are you speaking of Security Correlation Searches or something else? If Correlation Searches, install the Splunk Security Essentials App (https://splunkbase.splunk.com/app/3435): there's a very comprehensive list of Correlation Searches, and it also permits an analysis of your data to understand which of them are applicable to your data, and gives you a test data set to see these Correlation Searches in action. Ciao. Giuseppe
Hi @jagan_vannala, as @marnall also said, your question is too vague: which kind of logs are you speaking of? Have you already ingested them, or do you still have to index them? If you already indexed them, you must know their index and sourcetype. If you have to index them, see https://docs.splunk.com/Documentation/SplunkCloud/8.1.10/Data/Getstartedwithgettingdatain for the ways to ingest and index logs. Ciao. Giuseppe
We're seeing a similar issue with ES 7.3.1 running on Core 9.3.0. The spathannotates custom command is failing with the same "module 'time' has no attribute 'clock'" error and, like @cbrewer_splunk, I traced it back to ./SA-Utils/lib/SolnCommon/cexe.py. We'll upgrade to ES 7.3.2 (the latest version at this point), but I'm not convinced that the issue will be fixed, based on the Fixed Issues for ES 7.3.2: https://docs.splunk.com/Documentation/ES/7.3.2/RN/FixedIssues
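For background, time.clock() was removed in Python 3.8, which is why code that still calls it breaks on the Python 3 interpreter bundled with recent Splunk versions. If one were patching a copy of a file like cexe.py locally (purely a sketch, not a supported fix), the usual replacement is time.perf_counter():

import time

# time.clock() was removed in Python 3.8, hence the AttributeError.
# perf_counter() is the usual wall-clock replacement.
start = time.perf_counter()
total = sum(range(1000000))  # stand-in workload
print("computed", total, "in", time.perf_counter() - start, "seconds")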
Hi @NK, I suppose that you're using the Splunk_TA_nix add-on to ingest the Linux logs; if not, use it! You have to enable the [script://./bin/netstat.sh] input. In this way, you'll have the same information as on Windows. Ciao. Giuseppe
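Enabling it in the add-on's local/inputs.conf would look roughly like this sketch (the sourcetype and source values mirror what the add-on usually ships; the interval is just an example):

[script://./bin/netstat.sh]
sourcetype = netstat
source = netstat
interval = 60
disabled = 0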
Hi @jwhughes58, instead of using the lookup, why don't you dedup on all the fields contained in your events? Or take a portion of _raw (excluding the timestamp) and dedup on that? Ciao. Giuseppe
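The second idea could be sketched like this; the regex assumes the timestamp is the first whitespace-delimited token of _raw, so adjust it to your actual event format:

<your_search>
| eval raw_no_ts = replace(_raw, "^\S+\s+", "")
| dedup raw_no_ts
| fields - raw_no_ts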
Hi @jessieb_83, there's no reason to put thawedPath in a volume, because it's a mount point to use when you have to remount discarded buckets:

thawedPath = <string>
* An absolute path that contains the thawed (resurrected) databases for the index.
* CANNOT contain a volume reference.
* Path must be writable.
* Required. Splunkd does not start if an index lacks a valid thawedPath.
* You must restart splunkd after changing this setting for the changes to take effect. Reloading the index configuration does not suffice.
* Avoid the use of environment variables in index paths, aside from the exception of SPLUNK_DB. See 'homePath' for additional information as to why.

as you can read at https://github.com/jewnix/splunk-spec-files/blob/master/indexes.conf.spec

This is a manual action that you perform only on request; it isn't a continuous one. If you want to keep discarded logs available in-line, enlarge your retention period and keep them in the Cold state. Ciao. Giuseppe
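Enlarging the retention period is an indexes.conf change along these lines (the index name and the one-year value are placeholders to adapt):

[your_index]
# buckets roll to frozen (deleted or archived) only after this age
frozenTimePeriodInSecs = 31536000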
Hi @salavi, are you speaking of a filter at index time or at search time? If at search time, you can put the list in a lookup and use it in the search:

<your_search>
    [ | inputlookup your_lookup.csv
    | fields HostName ]

You can eventually refresh your lookup, taking values from a scheduled search:

<your_lookup_search>
| dedup HostName
| table HostName
| outputlookup your_lookup.csv

which you can schedule, e.g., every hour. Ciao. Giuseppe
Alright, I'm still learning these tstats queries. My update on this issue: I'm still struggling to match values from one query, for display in a dashboard, to its event details. I assume it's caused by the parameters of the query, but on the other side I want to show other fields related to the event, even when a field is empty. I opened this issue here: Best approach using tstats for splunk dashboard an... - Splunk Community
Hello @yuanliu, can you please help with my other question? It is closer to the real data and it has complexity, since it involves multiple fields: https://community.splunk.com/t5/Splunk-Search/How-do-I-quot-Left-join-quot-by-appending-CSV-to-an-index-in/m-p/696908#M236826 I think I solved it, but I wonder if there is a way to do it without merging the string and mvexpand. I appreciate your help. Thank you so much.
Hello,

How do I "left join" by appending a CSV to an index on multiple fields? I was able to solve the problem, but:

1) Is it possible to solve this problem without string manipulation and mvexpand? (See the code.) Mvexpand caused slowness.
2) Can "stats values" NOT remove the duplicates? In this case, stats values(*) as * by ip merged the fields "risk" and "score" and removed the duplicates. My workaround is to combine the strings to retain the duplicates.
3) a) Why does "stats values" ignore empty strings?
   b) Why does concatenating null to a non-null string result in null? I have to use fillnull in order to retain the data.

Please review the sample data, drawing, and the code. Thank you for your help!

host.csv:

ip_address  host
10.1.1.1    host1
10.1.1.2    host2
10.1.1.3    host3
10.1.1.4    host4
10.1.1.5    host5
10.1.1.6    host6
10.1.1.7    host7

index=risk:

ip        risk   score  contact
10.1.1.1  riskA  6
10.1.1.1  riskB  7
10.1.1.1                person1
10.1.1.1  riskC  6
10.1.1.2                person2
10.1.1.3  riskA  6      person3
10.1.1.3  riskE  7      person3
10.1.1.4  riskF  8      person4
10.1.1.8  riskA  6      person8
10.1.1.9  riskB  7      person9

"Left join" expected output - yellow and green rectangle (see drawing below):

ip        host   risk   score  contact
10.1.1.1  host1  riskA  6
10.1.1.1  host1  riskB  7
10.1.1.1  host1                person1
10.1.1.1  host1  riskC  6
10.1.1.2  host2                person2
10.1.1.3  host3  riskA  6      person3
10.1.1.3  host3  riskE  7      person3
10.1.1.4  host4  riskF  8      person4
10.1.1.5  host5
10.1.1.6  host6
10.1.1.7  host7

| makeresults format=csv data="ip_address, host
10.1.1.1, host1
10.1.1.2, host2
10.1.1.3, host3
10.1.1.4, host4
10.1.1.5, host5
10.1.1.6, host6
10.1.1.7, host7"
| eval source="csv"
| rename ip_address as ip
| append
    [makeresults format=csv data="ip, risk, score, contact
10.1.1.1, riskA, 6, ,
10.1.1.1, riskB, 7 ,
10.1.1.1, ,, person1,
10.1.1.1, riskC, 6,,
10.1.1.2, ,, person2,
10.1.1.3, riskA, 6, person3,
10.1.1.3, riskE, 7, person3,
10.1.1.4, riskF, 8, person4,
10.1.1.8, riskA, 6, person8,
10.1.1.9, riskB, 7, person9"
    | fillnull score value=0
    | fillnull risk, score, contact value="N/A"
    | eval source="index"]
| eval strmerged = risk + "," + score + "," + contact
| stats values(*) as * by ip
| mvexpand strmerged
| eval temp = split(strmerged,",")
| eval risk = mvindex(temp, 0)
| eval score = mvindex(temp, 1)
| eval contact = mvindex(temp, 2)
| search (source="csv" AND source="index") OR (source="csv")
| table ip, host, risk, score, contact
Hello guys, I wonder if there's any query that can list the mapping information between the existing data models and indexes? I would like to use this info to set index constraints for the data models to speed up searching. Thanks & Regards, Iris
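One possible starting point is a REST sketch like this (assuming the datamodel/model endpoint and the usual JSON layout of eai:data; verify both on your version). It pulls each model definition and extracts the constraint searches, which normally contain the index= references you would review:

| rest /servicesNS/-/-/datamodel/model
| spath input=eai:data path=objects{}.constraints{}.search output=constraint
| table title constraint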
Great, thank you bowesmana. It is working as expected, except that I can't get to see values on the avg graph. I tried to turn on the "show data values" option with the min/max setting, which shows values on the log graph but not on the avg value graph. Do you have any suggestions to get it done? Appreciate your support. Thanks
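If these are Simple XML panels, each chart controls its own labels through charting.chart.showDataLabels (valid values: none, all, minmax), so the avg panel needs the option set on it as well. A sketch with a stand-in search:

<chart>
  <search>
    <query>index=_internal source=*metrics.log group=per_index_thruput | timechart avg(kb)</query>
  </search>
  <option name="charting.chart.showDataLabels">all</option>
</chart>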
@marnall Is there any other way to unlock the account other than contacting Splunk support? Since I don't have a paid plan, I don't think I can access support directly. I still have my free trial period left and I'd like to test a few more features.
So my manager needs to verify who was on call for certain days in order to pay them appropriately. Generally I would think there was some basic way to do this with Splunk On-Call. However, it appears that there is no way to do this (to my knowledge).

Our company pays approx. 60K USD for this service, and I have to come here to ask a question and get support, because when I attempt to log a ticket, the form cannot populate the instance section, preventing me from submitting it. (Separate issue - likely a dark pattern to avoid dealing with customer concerns as much as possible.)

Things I've tried:
- Viewing the schedule... nope, it only shows the current week.
- Getting a report. Surely this will work... turns out no, it's just a summary of hours. Lovely, no dates attached.
- I know! Importing the .ics file into my calendar, that has to work... Yet again nothing, zero, donuts... no historical data.

How on earth can I get a simple historical report saying who was actually on call for my schedule, and for what dates?
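One avenue that may reach the historical data when the UI won't: the Splunk On-Call (VictorOps) REST API exposes a reporting endpoint for the on-call log. I'm writing this from memory, so treat the path, version, and parameters as assumptions to verify against the current API docs:

# hypothetical sketch - confirm endpoint and headers in the On-Call API docs
curl -s "https://api.victorops.com/api-reporting/v2/team/YOUR_TEAM_SLUG/oncall/log" \
  -H "X-VO-Api-Id: YOUR_API_ID" \
  -H "X-VO-Api-Key: YOUR_API_KEY"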
I have installed a Universal Forwarder on my Windows VM and configured both the inputs.conf and outputs.conf. I can confirm that the outputs.conf is working, because logs from the following inputs are showing up in Splunk:

[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

However, logs under Applications and Services Logs are not showing up:

[WinEventLog://Directory Service]
disabled = 0

[WinEventLog://DNS Server]
disabled = 0

I have checked the Event Viewer to confirm that there are logs. The only difference that I see is that the logs that are showing sit under Event Viewer (Local) -> Windows Logs, while the ones that are not showing sit under Event Viewer (Local) -> Applications and Services Logs.

My inputs.conf file:

host = <full computer name>

[WinEventLog://Application]
disabled = 0

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[WinEventLog://Directory Service]
disabled = 0

[WinEventLog://DNS Server]
disabled = 0
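One check worth running on the VM: the stanza name must match the channel name exactly as Windows registers it, and for Applications and Services Logs the registered name can differ from the name shown in Event Viewer. wevtutil lists the real channel names (findstr just filters the output):

wevtutil el | findstr /i "DNS"
wevtutil el | findstr /i "Directory"

If the listed name differs from what you configured, that exact string is what belongs inside [WinEventLog://...].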