All Posts


Where is it showing the same results? Is it a scheduled saved search where you are looking at the most recent results? Or is it a saved search you run manually or from a dashboard?
In your rex example you wrote | rex field=event.url ... - that is why SOURCE_KEY is event.url, as that is where the URLs are coming from, right? Your rex example indicated you are extracting the URL into a field called url_domain, which is also what is in the transforms.
If you have two events, you can't just match text between events - the text from log a does not exist when running the match statement against the log b data. Without seeing your SPL it's hard to know what you are doing - can you post the entire SPL? Please do this in a code block (</> button).

If you have two events, you need to correlate them together using stats on a common field - in this case, your file name. So extract the file name from both events, define a "message type" (log a or log b), and then you can do something like this:

| eval logtype=if(condition..., "loga", "logb")
| rex "....(?<filename>....)"
| stats count values(logtype) as logtypes min(_time) as StartTime max(_time) as EndTime by filename
| where count>1 AND logtypes="loga" AND logtypes="logb"
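As a sketch only, applying that pattern to the example logs quoted later in this thread might look like the following. The index/field names come from the question; the rex patterns and the searchmatch() condition are illustrative assumptions, not a tested answer:

```
index=a env=a account=a ("There is a file" OR "The file has been found")
| eval logtype=if(searchmatch("There is a file"), "loga", "logb")   ``` classify each event, assumed phrasing ```
| rex "name (?<fileA>\S+)$"
| rex "second destination.*\/(?<fileB>\S+)$"
| eval filename=coalesce(fileA, fileB)
| stats count values(logtype) as logtypes min(_time) as StartTime max(_time) as EndTime by filename
| where count>1 AND logtypes="loga" AND logtypes="logb"
| eval Duration=EndTime-StartTime
```

The final eval gives the elapsed seconds between the two events for each file name, with no subsearch involved.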
So with the "SOURCE_KEY = event.url" what I do is call the field where I want to get the information from?  In my case it would be the urls stored there.
You can do this in the UI - go to Settings -> Fields -> Field Transformations and add the regex and the field you want to extract from, then in Field Extractions add a new extraction using transforms and reference the Field Transformation. This will translate to something like the following in the props/transforms conf files.

In transforms.conf you will need:

[url_domain]
CLEAN_KEYS = 0
REGEX = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?)(?<url_domain>[^\n:\/]+)
SOURCE_KEY = event.url

In props.conf:

[sec-web]
REPORT-file_name = url_domain
Yes, log b and log a have the same index=a env=a account=.

SPL:

rex field=_raw "The file has been found at the second destination[a-zA-Z0-9]+\/(?<filename2>[^\"]*)"

This works - I get the file names. These are exactly the logs that I am trying to match; I was using if(like....) at first.

log a:  There is a file has been received with the name test2.txt
log b:  The file has been found at the second destination C://user/test2.txt
Does log b come from "index=a env=a account="? If not, then you need to search both data sets to find log a and log b.

I am not sure what your SPL  |field filename from log b | field filename2|  is doing, as that's not valid SPL. Your match statement is not valid either - you are using SQL wildcards (%), but match() takes regular expressions. Can you give an example of the data that you'd like to match?
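To illustrate the difference, here is a minimal self-contained sketch (the sample text and field names are made up): like() uses SQL-style % wildcards, while match() expects a regex, so % must become .*:

```
| makeresults
| eval msg="There is a file has been received with the name test2.txt", fname="test2.txt"
| eval with_like=if(like(msg, "%".fname."%"), 1, 0)        ``` SQL wildcards work with like() ```
| eval with_match=if(match(msg, ".*".fname.".*"), 1, 0)    ``` regex syntax for match() ```
```

Note that regex metacharacters inside fname (such as the dot in test2.txt) would need escaping if an exact literal match is required.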
You have not done what I suggested - you have changed the SPL to something that will not work. Please use the exact code I provided, placed after your timechart command.
Here's a simple example:

<form version="1.1">
  <label>HostDropdown</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="hosts" searchWhenChanged="true">
      <label>Host Types</label>
      <choice value="prodhost*">Production</choice>
      <choice value="qahost*">QA</choice>
      <choice value="testhost*">Test</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>
            index=aaa source="/var/log/test1.log" host=$hosts$ | stats count by host
          </query>
          <earliest>$earliest$</earliest>
          <latest>$latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

I suggest you look at this and have a look through the documentation that describes it: https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML
I have two logs below. Log a occurs throughout the environment and would be shown for all users; log b is limited to specific users. I only need times for users in log b.

log a:  There is a file has been received with the name test2.txt
log b:  The file has been found at the second destination C://user/test2.txt

I am trying to write a query that captures the time between log a and log b without doing a subsearch. So far I have:

index=a, env=a, account=a ("There is a file" OR "The file has been found")|field filename from log b | field filename2| eval Endtime = _time

Here is where I am lost. I was hoping to use if/match/like/eval to capture the start time where the log b filename can be found in log a. I have this so far:

| eval Starttime = if(match(filename,"There is%".filename2."%"),_time,0)

I am not getting any 1s, just 0s. I am pretty sure the problem is "There is%".filename2."%" - how do I correct it?
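One hedged sketch of how that condition could be corrected (field names taken from the post; the escaping idiom is an assumption about the data): match() takes a regex, so the % wildcards become .*, and any regex metacharacters in filename2 (such as the dot in test2.txt) should be escaped first:

```
| eval safe_name=replace(filename2, "\.", "\\.")              ``` escape literal dots for regex use ```
| eval Starttime=if(match(_raw, "There is.*".safe_name), _time, 0)
```

This still only works within a single event, though - matching text across two separate events needs a stats-based correlation on a common field, as discussed elsewhere in this thread.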
Thank you @marnall and @yuanliu. Yes, multiselect provides the same functionality, but I was going for the same look and feel between Simple XML and Studio.
The event.url field stores all the URLs found in the logs. I want to create a new field called url_domain that captures only the domain of the URLs stored in event.url. Temporarily, what I do from the search is write the following:

| rex field=event.url "^(?:https?:\/\/)?(?:www[0-9]*\.)?(?)(?<url_domain>[^\n:\/]+)"

What should I add in props.conf so that this extraction is fixed for the sourcetype "sec-web"?
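As a sketch of one possible answer (not confirmed in this thread): a search-time extraction can also be defined directly in props.conf with an EXTRACT setting, using the "in <field>" suffix so the regex runs against event.url rather than _raw. Note the empty group (?) from the original regex is omitted here, since it appears to be unintended and is not valid regex syntax:

```
[sec-web]
EXTRACT-url_domain = ^(?:https?:\/\/)?(?:www[0-9]*\.)?(?<url_domain>[^\n:\/]+) in event.url
```

The REPORT/transforms.conf approach suggested in the reply above is equivalent and better suited if the extraction will be reused.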
So, I created a savedsearch and it was working fine. But I had to change the SPL for it, and when I tried it again, it was still showing the old results and not reflecting the new SPL changes. Why? Do I have to wait for the changes to happen?
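One hedged way to check whether the new SPL was actually saved (the saved search title is a placeholder; the rest endpoint is the standard one for saved searches):

```
| rest /servicesNS/-/-/saved/searches
| search title="my_saved_search"
| table title search
```

If the definition shown there is the new SPL but the results still look old, you may be viewing cached results of an earlier run (for example via a dashboard's saved job or loadjob) rather than re-running the search.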
Hello Fellow Splunkers, I'm fairly new to ITSI and was wondering if this could be achieved. I'm looking to create a report which would list all the Services I have in ITSI along with their associated entities, as well as the associated alerts or severity. Is there a query that could achieve this? Any pointers are very much appreciated! Also, any pointers to where I could potentially find the data and bring it together in a search would be very helpful too. Thanks!
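Not a definitive answer, but a sketch of two places to start looking (index, endpoint, and field names are standard ITSI ones, though they can vary by version, so treat these as assumptions). Service definitions are exposed over ITSI's itoa_interface REST endpoint:

```
| rest /servicesNS/nobody/SA-ITOA/itoa_interface/service
| table title
```

And KPI results with their severities are written to the itsi_summary index, which could be aggregated per service:

```
index=itsi_summary
| stats latest(alert_severity) as severity by serviceid, kpi
```

Joining the two on the service id would be the basis for the combined report.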
@tscroggins, Is the suggested configuration restricted to certain Splunk versions? We have tried different options but we are not seeing the CSV formatted as expected, even after the instances were restarted.

Thanks in advance. We have run the reports as simply as possible, e.g.:

index=os earliest=-5m | timechart span=1m values(host)

Regards
As in outlook.com? If so, there is an article here describing how to connect to it via SMTP: https://support.microsoft.com/en-us/office/pop-imap-and-smtp-settings-for-outlook-com-d088b986-291d-42b8-9564-9c414e2aa040 Enter the required credentials into your Splunk email settings, and it should work.
Could you try this SEDCMD in the props.conf file? (Make sure that the stanza is changed to match the sourcetype of the logs.)

[your_sourcetype]
SEDCMD-maskpasswords = s/password: ([^;]+);cpassword: ([^;]+);/password: ####;cpassword: ####;/g
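A quick way to test the sed expression before deploying it, using rex in sed mode against a made-up sample event (the sample text is an assumption about the log format):

```
| makeresults
| eval _raw="user: bob;password: abc123;cpassword: def456;"
| rex mode=sed "s/password: ([^;]+);cpassword: ([^;]+);/password: ####;cpassword: ####;/g"
```

Keep in mind that SEDCMD applies at index time, so it only masks events indexed after the configuration change; previously indexed events are untouched.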
@marnall - @richgalloway's solution is creating false alerts, triggering even when servers are up. Can you provide another solution such that the alert is not triggered during the maintenance window even if servers are down, but is triggered outside the window when at least one of the servers is down?
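A hedged sketch of one common approach (the window hours and the shape of the existing alert search are placeholders): have the alert search compute whether it is running inside the maintenance window and drop all results during it, so the trigger condition can never fire there:

```
... your existing "server is down" alert search ...
| eval hour=tonumber(strftime(now(), "%H"))       ``` current hour when the alert runs ```
| eval in_maint=if(hour>=2 AND hour<4, 1, 0)      ``` placeholder 02:00-04:00 window ```
| where in_maint=0
```

A lookup file listing maintenance windows per server would be a more flexible variant of the same idea if windows differ by host.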
How do I take a dashboard global time (i.e. - $global_time.earliest$, $global_time.latest$) and convert it into a date to be used when searching a lookup file that only has a date column (i.e. - 04/15/2024)?
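One hedged approach (the lookup and field names are placeholders): inside the search, addinfo exposes the resolved time bounds of the search as epoch values, which strftime can then format to match the lookup's date column. This avoids parsing the raw token, which may hold a relative value like "-24h" rather than an epoch:

```
| addinfo
| eval date=strftime(info_min_time, "%m/%d/%Y")   ``` earliest bound formatted like 04/15/2024 ```
| lookup my_dates_lookup date OUTPUT some_field
```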
Not that I've seen. I've reached out to the developer and have a case logged with Splunk, so I'll post once there's an update.