All Posts


Close enough. I got it down to showing the dates and what I needed in the correct order, but running from right to left, using the below:

| inputlookup running_data.csv
| eval _time=strptime(Date, "%m/%d/%Y")
| sort 0 -_time
| timechart span=1d sum(sats) as sats by team useother=false limit=0
| fillnull value=0
| tail 12
| eval Date=strftime(_time, "%m/%d/%Y")
| fields - _*
| transpose 12 header_field=Date
| rename column as team
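One thing worth trying, as a sketch assuming the same lookup and fields: transpose emits columns in row order, so flipping the row order just before the transpose flips the left-to-right direction of the date columns. Note that timechart outputs rows in ascending _time regardless of any earlier sort, so the reverse has to come after it:

```
| inputlookup running_data.csv
| eval _time=strptime(Date, "%m/%d/%Y")
| timechart span=1d sum(sats) as sats by team useother=false limit=0
| fillnull value=0
| tail 12
| reverse
| eval Date=strftime(_time, "%m/%d/%Y")
| fields - _*
| transpose 12 header_field=Date
| rename column as team
```

Drop the `| reverse` if the columns should run oldest to newest instead.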
Hi @steve32507, what's your version? Anyway, here you can find the upgrade procedure for Splunk Enterprise: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/HowtoupgradeSplunk and here you can find the Universal Forwarder upgrade procedure: https://docs.splunk.com/Documentation/Forwarder/9.1.2/Forwarder/Upgradetheuniversalforwarder The steps depend on the starting version. Ciao. Giuseppe
Hi, I have the below scenario. Please could you help?

spl1: index=abc sourcetype=1.1 source=1.2 "downstream" "executioneid=*"
spl2: index=abc sourcetype=2.1 source=2.2 "do not write to downstream" "executioneid=*"

Both SPLs use the same index, and they have a common field called executionid. Some execution IDs are designed not to go to the downstream application in the flow. I want to combine these two SPLs based on the executioneid.
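One common pattern for this, as a sketch assuming both event sets carry the executioneid field as written, is to run the two searches as a single base search and then group by the ID:

```
(index=abc sourcetype=1.1 source=1.2 "downstream" executioneid=*)
OR (index=abc sourcetype=2.1 source=2.2 "do not write to downstream" executioneid=*)
| stats values(sourcetype) AS sourcetypes count BY executioneid
| where mvcount(sourcetypes)=2
```

The final `where` keeps only IDs seen in both sources; change it to `mvcount(sourcetypes)=1` to find IDs that appear on one side only.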
It's not clear what you want to do.  Are you trying to tell Splunk how to know a file it is reading is an Apache log?  Or are you trying to determine if some search results contain Apache logs?  Something else?
The search results will be retained on the search head for 7+ days.  That means disk space will be consumed and not released until the search expires.  The role's disk quota also will be consumed, which may prevent future searches from running.
How do I upgrade Splunk Enterprise & Universal Forwarder to 9.0 or higher?
Hi @steve32507, could you describe your question in more detail? It isn't readable. Ciao. Giuseppe
How do I remediate this vulnerability? Tenable 164078  Upgrade Splunk Enterprise or Universal Forwarder to version 9.0 or later.
Hi @dtburrows3,

Thanks for your response!

If we need to add those two lines in a single search of a macro, where we receive Type as a token from the search/dashboard, how do we do that?

I tried it this way, but it doesn't work:

| where if(macth('Type', "ADZ"), "match(Assetname, \"^\\S{2}Z\")", "isnotnull(Assetname)")

Thanks in advance!
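Aside from the `macth` typo, the attempt fails because eval's if() returns a value, not an executable expression, so the quoted predicates are never evaluated. One workable sketch, assuming 'Type' arrives as the token value, folds the condition into a single boolean:

```
| where (match('Type', "ADZ") AND match(Assetname, "^\S{2}Z"))
    OR (NOT match('Type', "ADZ") AND isnotnull(Assetname))
```

An equivalent form uses case(): `| where case(match('Type',"ADZ"), match(Assetname,"^\S{2}Z"), true(), isnotnull(Assetname))`.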
Hello. I have a problem with a Splunk Dashboard Studio table. Sometimes after refreshing the table, when the content is reloaded, column widths become random: some are too wide, some are too narrow, even though there is a lot of blank space. That makes the content not fit the table, and a scroll bar appears (an example of what it looks like can be seen below).

It does not happen all the time, only occasionally, and I was not able to determine what it depends on. After sorting the table by one of the columns, everything goes back to normal: column widths become even and the content does not overflow anymore (an example of what the table should look like can be seen below).

Note that I have set a static width for the first column. I have tried removing it, but that does not seem to help much; column widths still get messed up.

Does anyone have any suggestions as to what could be causing this? I would like to avoid setting static widths for all columns if possible, because in some situations the total number of columns can be different. I am using Splunk Enterprise v9.1.1.
@dtburrows3 this query shows the date & time haphazardly. How do I sort it like 1/4/2024, 1/3/2024, 1/2/2024, ...?

index="*" source="*" | eval timestamp=strftime(_time, "%m/%d/%Y") | chart limit=30 count as count over DFOINTERFACE by timestamp
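One likely cause, as a sketch rather than a verified diagnosis: chart orders its by-field columns lexically, and "%m/%d/%Y" does not sort lexically in chronological order (e.g. 01/03/2024 sorts before 12/01/2023). Using a year-first format makes lexical order match chronological order:

```
index="*" source="*"
| eval timestamp=strftime(_time, "%Y/%m/%d")
| chart limit=30 count AS count OVER DFOINTERFACE BY timestamp
```

That yields oldest-to-newest columns; getting a strictly newest-first "1/4/2024, 1/3/2024, ..." order would need a further step, since chart itself always sorts the split-by values ascending.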
I have a sample log file from Apache, now how can I identify it with Splunk that this log is really an Apache log are there a tools or any method for that ?
Hi, yesterday I upgraded a Splunk instance from 8.2.6 to 9.1.2. Afterwards, all users that have the role "user" are logging this event every 10 milliseconds:

01-04-2024 08:53:44.220 +0000 INFO AuditLogger - Audit:[timestamp=01-04-2024 08:53:44.220, user=test_user, action=admin_all_objects, info=denied ]

This issue is filling the _audit index very fast. I had to reduce the index size as a workaround, but that doesn't resolve the problem. Have you ever had this problem in your environment?
Hi @RSS_STT, first of all, did you extract all the fields? If yes, you have to use eval to create the new field, applying the conditions you described:

<your_search>
| stats dc(TOOL_Status) AS TOOL_Status_count values(TOOL_Status) AS TOOL_Status values(description) AS description values(Host_Name) AS Host_Name BY Event_ID
| eval new_status=if(TOOL_Status_count=2,1,0)
| where TOOL_Status_count=2 OR TOOL_Status="OPEN"

In this way you have all the Event_IDs with both statuses, or with Status=OPEN. If your condition differs from what I supposed, you can change the search following my logic. Ciao. Giuseppe
I have field values in events something like below:

TOOL_Status | description             | Event_ID | Host_Name
CLOSED      | 21alerts has been issued | abc      | 2143nobi11
CLOSED      | 21alerts has been issued | abc      | 2143nobi11
OPEN        | 21alerts has been issued | abc      | 2143nobi11
OPEN        | 21alerts has been issued | 111      | 2143nobi12
CLOSED      | 21alerts has been issued | 111      | 2143nobi12
CLOSED      | 21alerts has been issued | xyz      | 2143nobi15
CLOSED      | 21alerts has been issued | xyz      | 2143nobi15
CLOSED      | 21alerts has been issued | xyz      | 2143nobi15

If both TOOL_Status=OPEN and TOOL_Status=CLOSED exist for the same Event_ID, create a new field new_status=1. Event_IDs for which only TOOL_Status=CLOSED exists should be ignored.
I am also getting the same error. Did it get fixed for you?
We are looking for an API request which fetches the audit logs/events performed by users in various applications.
Hi all, I have created a search which returns a set of email addresses and a few hosts, and I am using the table command to display them. The result looks like below:

Hostname | Agent Version | Email
host1    | 1.0           | test1@gmail.com
host2    | 2.0           | test2@gmail.com
host3    | 2.0           | test1@gmail.com
host4    | 2.0           | test1@gmail.com

Now I want to send separate emails to test1@gmail.com and test2@gmail.com. Each email should only contain the hosts belonging to that recipient, i.e. host1, host3, host4 and their agent versions should go to test1@gmail.com, and host2 should go to test2@gmail.com.

I also want to embed a link in the alert email body that redirects to the search result and contains only the hostnames that belong to that particular recipient. Can anyone help me with how to generate a dynamic alert link?

Regards, PNV
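One sketch of an approach, assuming the alert's trigger action is set to run once per result and that the base URL and index name below are placeholders for your own environment: group the rows by Email, then build a per-recipient drilldown search string that the email body can reference via the $result.*$ alert tokens:

```
<your_search>
| stats values(Hostname) AS Hostname values("Agent Version") AS "Agent Version" BY Email
| eval drilldown="https://mysplunk.example.com:8000/en-US/app/search/search?q=search%20index%3Dmyindex%20Email%3D" . Email
```

In the sendemail action, $result.Email$ can drive the To address and $result.drilldown$ (plus $result.Hostname$) can be embedded in the message body. Note the query string here is only naively encoded; Email values containing special characters would need proper URL encoding upstream.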
I have the following transforms.conf file:

[pan_src_user]
INGEST_EVAL=src_user_idx=json_extract(lookup("user_ip_mapping.csv",json_object("src_ip", src_ip),json_array(src_user_idx)),"src_user")

and props.conf file:

[pan:traffic]
TRANSFORMS-pan_user = pan_src_user

user_ip_mapping.csv file sample:

src_ip,src_user
10.1.1.1,someuser

However, it's not working, and I'm not sure what I'm doing wrong. The src_user_idx field is not showing up in any of the logs.
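One thing worth checking, offered as a sketch rather than a verified fix: the third argument of the ingest-time lookup() eval function is a json_array of output field names as string literals, so json_array(src_user_idx) references a field that does not exist at parse time. Quoting the lookup's output column may resolve it:

```
[pan_src_user]
INGEST_EVAL = src_user_idx=json_extract(lookup("user_ip_mapping.csv", json_object("src_ip", src_ip), json_array("src_user")), "src_user")
```

Also make sure the CSV is deployed where parsing happens (the indexers or heavy forwarders handling pan:traffic), not just on the search head, since INGEST_EVAL runs at index time.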
Sure @richgalloway - waiting for the developer license file as of now. Will try to upload it once received. Also, there are no .lic files under said location: $SPLUNK_HOME/etc/licenses/. Thanks