All Posts

Hi @mobrien1 , I don't know the opinion of other colleagues, but I don't like using _indextime in place of the event time (replacing the timestamp with _indextime at indexing), because that way you lose the time correlation between your events. If instead you want to keep the event timestamp and use _indextime only in searches, my question is simply: why? This request usually comes from the need to compare results with other SIEMs that work this way, but in my opinion Splunk's approach is more rigorous and effective, and you can still add conditions on _indextime to your main searches (I did this for an acceptance test). If something happened at a certain time, I need that information to analyze the events in that time period, possibly from other sources, which may have been indexed earlier or later. Anyway, to answer your questions: I don't think there is an impact on search performance. Regarding use in data models, I believe there is no impact either, because the DMs are populated based on the timestamp of the events. Regarding filters on _indextime, you can use them right away. One last note: never use "All time" in your searches. Ciao. Giuseppe
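As a hedged illustration of the kind of _indextime condition mentioned above (index name and time windows are placeholders, not from the thread):

index=your_index earliest=-24h
| eval lag=_indextime-_time
| where _indextime>=relative_time(now(),"-1h@h")
| table _time _indextime lag

This keeps the event timestamp for correlation while still letting you filter or measure by arrival time.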
Hi @msarkaus , if the condition is fixed (SUCCEDED 34 chars and FAILED 39 chars), you could use an if condition in the eval command:

<your_search>
| eval msgTxt=if(match(msgTxt,"SUCCEDED"), substr(msgTxt, 1, 34), substr(msgTxt, 1, 39))
| stats count by msgTxt

Ciao. Giuseppe
Hi @the_sigma, the props.conf seems to be OK; what issue are you seeing? The only thing is that I don't see the TIME_FORMAT setting. Try saving a copy of your data in a text file and adding it using the Add Data function: that way you can test your extraction and any updates. Ciao. Giuseppe
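A hypothetical sketch of the setting in question (the strptime pattern must match the real data, which is not shown in the thread):

[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19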
Hi @nabeel652 , ok, please try this:

<your_search>
| autoregress status as status_old p=1
| table _time status status_old
| eval NO=0
| foreach NO [ eval NO=if(status=status_old,NO,NO+1)]
| accum NO

which I tested (and it runs) this way:

| makeresults | eval _raw= "Online 1"
| append [ | makeresults | eval _raw= "Online 1"]
| append [ | makeresults | eval _raw= "Online 1"]
| append [ | makeresults | eval _raw= "Break 2"]
| append [ | makeresults | eval _raw= "Break 2"]
| append [ | makeresults | eval _raw= "Online 3"]
| append [ | makeresults | eval _raw= "Online 3"]
| append [ | makeresults | eval _raw= "Lunch 4"]
| append [ | makeresults | eval _raw= "Lunch 4"]
| append [ | makeresults | eval _raw= "Lunch 4"]
| append [ | makeresults | eval _raw= "Offline 5"]
| append [ | makeresults | eval _raw= "Offline 5"]
| rex "^(?<status>\w+)"
| autoregress status as status_old p=1
| table _time status status_old
| eval NO=0
| foreach NO [ eval NO=if(status=status_old,NO,NO+1)]
| accum NO

Ciao. Giuseppe
Ciao @riyastk , I am not aware of any change in how the timechart command works; are you really sure the data in the first case are the same as in the second? What happens if you run the search up to just before the timechart and count the occurrences yourself? In particular, check the values that your search labels as "Invalid". Also, since you use the eval command, why don't you use the results of that command instead of the "<2xx" condition? Otherwise the command is useless. Ciao. Giuseppe
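A minimal sketch of that check, reusing the eval from the question below (stats replaces timechart so the counts can be compared by eye):

index=my_index sourcetype=jetty_access_log host="apiserver--*" url="/serveapi*"
| eval status_summary=case(status<200, "Invalid", status<300, "2xx", status<400, "3xx", status<500, "4xx", status<600, "5xx", true(), "Invalid")
| stats count by status_summary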
I have this query, which works well in Splunk 8, whereas in Splunk 9 I get a timechart with wrong values. Is there any change in the timechart or case function that may cause this query to stop working correctly?

index=my_index sourcetype=jetty_access_log host="apiserver--*" url="/serveapi*"
| eval status_summary=case(status<200, "Invalid", status<300, "2xx", status<400, "3xx", status<500, "4xx", status<600, "5xx", true(), "Invalid")
| timechart span=5m count(eval(status_summary="2xx")) as count_http2xx, count(eval(status_summary="3xx")) as count_http3xx, count(eval(status_summary="4xx")) as count_http4xx, count(eval(status_summary="5xx")) as count_http5xx, count(eval(status_summary="Invalid")) as count_httpunk

[Screenshot: the correct result (Splunk 8)]
[Screenshot: the incorrect result (Splunk 9)]
Hi @alberto-sirt, As @bowesmana said, you need to use a lookup definition instead of querying the lookup file itself. You can refer to this example: https://docs.splunk.com/Documentation/Splunk/9.2.2/Knowledge/Addfieldmatchingrulestoyourlookupconfiguration#Example_of_using_match_type_for_IPv6_CIDR_match.
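For reference, a minimal sketch of such a definition in transforms.conf (stanza, file, and field names here are hypothetical):

[ip_range_lookup]
filename = ip_ranges.csv
match_type = CIDR(cidr_range)

The lookup is then invoked by its definition name, e.g. | lookup ip_range_lookup cidr_range AS src_ip OUTPUT zone.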
Hi @Siddharthnegi , You can use this query; it works if the Hostname field is present in both lookups.

| inputlookup first_lookup where ([| inputlookup second_lookup | fields Hostname])
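For context: the subsearch expands to a filter of the form ( Hostname="a" ) OR ( Hostname="b" ) ..., so rows from first_lookup are kept only when their Hostname also appears in second_lookup.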
I have 2 lookups. The first lookup has multiple fields, including Hostname, and the second lookup has only a Hostname field. I want to find the Hostname values common to both lookups. How can I do that?
Hi, I have been working for a day or so on getting this to work. Here is my web.conf configuration:

[settings]
httpport = 443
loginBackgroundImageOption = custom
loginCustomBackgroundImage = search:logincustombg/Hub_Arial.jpg
enableSplunkWebSSL = true
privKeyPath = /opt/splunk/share/splunk/certs/decrypted-xxx_2025_wc.key
serverCert = /opt/splunk/share/splunk/certs/splunk-wc-xxx-com.pem

The key file is the decrypted private key; the server cert contains the server certificate plus the intermediate and root from GoDaddy G2. I am currently running Splunk 9.2.x. I am not finding a quick how-to for completing this task and am wondering if someone could give me a hand. I am capturing with Wireshark from the Firefox browser, and sometimes I see the client request but never the server response. I am not doing this from the command line or the web server; I am updating the files and restarting splunkd. Thanks in advance for your assistance. Please let me know if there are any questions.
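One way to inspect what the server is actually presenting during the TLS handshake (hostname is a placeholder):

openssl s_client -connect splunk.example.com:443 -showcerts

If the connection is refused or the chain comes back incomplete, that may point at the serverCert bundle rather than the browser.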
Hi all, I am currently having trouble finding the steps to forward syslogs from an Aruba switch into Splunk. The Aruba switch is set up to forward its syslogs to the correct IP on port 9997, the Splunk default. My issue is that these syslogs are not coming through, or at least are not visible. I have confirmed the computer can detect the switch and the switch sees the computer, so why are the syslogs not being forwarded? I have installed the Aruba Network Add-on for Splunk, but the result has not changed. If someone knows the correct steps to set this up, could they please provide them? Any help is greatly appreciated. Kind regards, Ben
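For reference, port 9997 is normally the Splunk-to-Splunk forwarding port rather than a syslog listener; a raw syslog input is usually a UDP or TCP stanza in inputs.conf. A hypothetical sketch (sourcetype and index are placeholders):

[udp://514]
sourcetype = aruba:syslog
index = network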
Thank you. I don't want to omit any records. This sort of gives me the required results, but records are missing, which I don't want. I want the same number of rows after the solution is applied.
Your lookup command is looking up file.csv, which is NOT the definition. The lookup file contains the data; the lookup definition is the lens through which you interpret the data in the file.
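A minimal sketch of the distinction, with hypothetical names: the file holds the rows, while the definition in transforms.conf points at the file and carries any matching rules:

[my_lookup]
filename = file.csv

The search then references the definition, e.g. | lookup my_lookup host OUTPUT owner, rather than file.csv directly.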
I have the following pipe-separated value file that I am having problems onboarding. The first row is the column headers; the second row is sample data. I'm getting an error when attempting to use the CREA_TS column as the timestamp.

CREA_TS|ACTION|PRSN_ADDR_AUDT_ID|PRSN_ADDR_ID|PRSN_ID|SRC_ID
07102024070808|INSERT|165713232|147994550|101394986|OLASFL

This is what I have in props.conf, but I cannot get it to identify the timestamp. Any help will be greatly appreciated.

[ psv ]
CHARSET=UTF-8
FIELD_DELIMITER=|
HEADER_FIELD_DELIMITER=|
INDEXED_EXTRACTIONS=psv
KV_MODE=none
SHOULD_LINEMERGE=false
category=Structured
description=Pipe-separated value format. Set header and other settings in "Delimited Settings"
disabled=false
pulldown_type=true
LINE_BREAKER=([\r\n]+)
TIMESTAMP_FIELDS=CREA_TS
TIME_PREFIX=^
MAX_TIMESTAMP_LOOKAHEAD=14
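One possible gap, offered as an assumption rather than a confirmed fix: there is no TIME_FORMAT telling Splunk how to parse the 14-digit value. If CREA_TS is month-day-year (07102024070808 = 2024-07-10 07:08:08), the stanza would gain:

TIME_FORMAT = %m%d%Y%H%M%S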
I too was seeing a similar message, with the GUID and IP of the UF that was supposedly having an issue. Accompanying that, I was getting an email from an alert I'd set up for "UFs no longer sending logs", and my monitoring console also showed it as missing. However, if I did a query for it on a search head, I was definitely still seeing current events coming in, and my deployment server said it was still checking in. This is in a mixed environment with the architectural Splunk components (MC, CM, DSLM, SHs, HFs, IDXs) running on Linux and the majority of UFs running on Windows. Due to my department, I do not have OS access to those Windows servers. As an experiment, I created a simple text file on the DS, set it to restart splunkd, added it to a new server class, and assigned only the problem UF client to it. As expected, once the client got the file, the UF restarted and the symptoms went away. @PickleRick Would removing the tracker.log have solved the issue as well? I had the admin, who had OS access to it, restart the UF, but that did not solve the issue. Maybe just restarting the UF wouldn't have been enough and it would have come back up using the same tracker.log?
You need to be able to identify which is the first event and which is the second. You can do this like so: | streamstats count as row
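A sketch of how the row number might then be used, assuming a hypothetical grouping field (transaction_id) that ties each pair of events together:

<your_search>
| streamstats count as row by transaction_id
| eval msgTxt=if(row=1, substr(msgTxt,1,34), substr(msgTxt,1,39))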
I'm trying to understand how Splunk apps that interface with other systems via an API get their API calls scheduled.  I'm not a Splunk app developer - just an admin curious about how the apps loaded on my Splunk environment work.   When you install and configure an app that has a polling interval configured within it, how does Splunk normally control the periodic execution of that API call?  Does it use the Splunk scheduler, or does each app have to write some kind of cron service or compute a pause interval between each invocation of the API?   Seems to me like the scheduler would be the obvious choice, but I can't find anything in the Splunk documentation to tell me for certain that the scheduler is used for apps to periodically call polled features.
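For what it's worth, apps that poll APIs are commonly packaged as modular inputs, and splunkd itself re-runs the input script on the interval declared in inputs.conf; a hypothetical stanza:

[my_api_input://production]
interval = 300

Here interval is in seconds (a cron expression is also accepted), so the app does not need its own cron service or sleep loop.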
Do you want to show 34 characters because the event is first, or for some other reason, with the event merely happening to be first?
Check your config with btool. It's relatively easy to mistype pass4SymmKey (the name of the option), making it effectively not set.
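For example, something along these lines (assuming the key lives in server.conf):

$SPLUNK_HOME/bin/splunk btool server list --debug | grep -i pass4symmkey

The --debug flag shows which file each line comes from, so a mistyped option name stands out.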
What does @marnall mean by "This isn't ideal because if you update the TA from Splunkbase in the future you will lose your changes"? What changes?