All Topics


I have a field called file_content on a source type. Every event is an email, and each event's file_content field contains a CSV. Field extraction isn't an option because file_content is really hard to find inside the data. I used a rex query to extract the data, but it's slow, and I get one CSV per hour every day, so I wonder if there is any automation or a better way to do this. My regex follows:

| rex field=file_content "(?P<ContactId>[^\s,]*),(?P<Customernumber>[^\s,]*),(?P<AfterContactWorkDuration>[^\s,]*),(?P<AfterContactWorkEndTimestamp>[^,]*),(?P<AfterContactWorkStartTimestamp>[^,]*),(?P<AgentInteractionDuration>[^\s,]*),(?P<ConnectedToAgentTimestamp>[^,]*),(?P<CustomerHoldDuration>[^\s,]*),(?P<Hierarchygroups_Level1_GroupName>[^\s,]*),(?P<Hierarchygroups_Level2_GroupName>[^\s,]*),(?P<Hierarchygroups_Level3_GroupName>[^\s,]*),(?P<LongestHoldDuration>[^\s,]*),(?P<NumberOfHolds>[^\s,]*),(?P<Routingprofile>[^\s,]*),(?P<Agent>[^\s,]*),(?P<AgentConnectionAttempts>[^\s,]*),(?P<ConnectedToSystemTimestamp>[^,]*),(?P<DisconnectTimestamp>[^,]*),(?P<InitiationMethod>[^\s,]*),(?P<InitiationTimestamp>[^,]*),(?P<LastUpdateTimestamp>[^,]*),(?P<NextContactId>[^\s,]*),(?P<PreviousContactId>[^\s,]*),(?P<DequeueTimestamp>[^,]*),(?P<Duration>[^\s,]*),(?P<EnqueueTimestamp>[^,]*),(?P<Name>[^\s,]*),(?P<TransferCompletedTimestamp>[^,]*),(?P<HandleTime>[^\s,]*),(?P<TicketNumber>((\"[^\"]*\")+|[^\s,]*)),(?P<Account>[^\s,]*),(?P<AccountName>[^\s,]*),(?P<Country>[^\s,]*),(?P<Language>[^\s,]*),(?P<Site>[^\s,]*),(?P<WrapCode>[^\s,]*)"

And here is an example of how the data should look (in CSV):

ContactId,Customernumber,AfterContactWorkDuration,AfterContactWorkEndTimestamp,AfterContactWorkStartTimestamp,AgentInteractionDuration,ConnectedToAgentTimestamp,CustomerHoldDuration,Hierarchygroups_Level1_GroupName,Hierarchygroups_Level2_GroupName,Hierarchygroups_Level3_GroupName,LongestHoldDuration,NumberOfHolds,Routingprofile,Agent,AgentConnectionAttempts,ConnectedToSystemTimestamp,DisconnectTimestamp,InitiationMethod,InitiationTimestamp,LastUpdateTimestamp,NextContactId,PreviousContactId,DequeueTimestamp,Duration,EnqueueTimestamp,Name,TransferCompletedTimestamp,HandleTime,TicketNumber,Account,AccountName,Country,Language,Site,WrapCode
aaaa-xxxxxx,123456789,90,29/06/2021 01:00,29/06/2021 01:00,111,29/06/2021 01:00,0,country1,xx,yy,90,90,language,dummy,1,29/06/2021 01:00,29/06/2021 01:00,type_x,29/06/2021 01:00,29/06/2021 01:00,,,29/06/2021 01:00,11,29/06/2021 01:00,type_y,29/06/2021 01:00,201,A123,xxx,xxx,country_y,language,type_w,xxxx
bbbb-xxxxxx,987654321,90,29/06/2021 01:00,29/06/2021 01:00,111+P4,29/06/2021 01:00,0,country1,xx,yy,90,90,language,dummy,1,29/06/2021 01:00,29/06/2021 01:00,type_x,29/06/2021 01:00,29/06/2021 01:00,,,29/06/2021 01:00,11,29/06/2021 01:00,type_y,29/06/2021 01:00,201,"""A123,B123""",xxx,xxx,country_y,language,type_w,xxxx

For example, you could run a report every day and save the outcome to a lookup, but this wouldn't work because it would be too much data for a lookup. I looked around and found some people talking about a summary index. Do you think this would be a good option for me?
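A summary index generally fits this pattern: schedule the extraction search and collect its results into a dedicated index instead of a lookup. A minimal sketch, assuming a hypothetical index email_idx, a hypothetical sourcetype email:csv, and a pre-created summary index contact_summary; only the first three CSV columns are shown here, and the full rex from the question slots in the same way:

index=email_idx sourcetype=email:csv file_content=*
| rex field=file_content "(?P<ContactId>[^\s,]*),(?P<Customernumber>[^\s,]*),(?P<AfterContactWorkDuration>[^\s,]*)"
| table _time ContactId Customernumber AfterContactWorkDuration
| collect index=contact_summary

Saved as a scheduled search running once an hour, this leaves the raw events untouched while searches against contact_summary stay fast because the extraction already happened.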
Hi, our app was built with Splunk Add-on Builder 3.0.1, which is supposed to support both Python 2 and Python 3. We use Python 3 in the configuration files and passed cloud vetting. My question is: when Splunk formally drops Python 2 this fall, will our app still be safe and stay on Splunkbase? I ran a scan with your Python Upgrade Readiness App, but it flagged a failure. Even after I deleted the aob_py2 folder that Add-on Builder generated, it still failed.

Has Add-on Builder itself passed this Python Upgrade Readiness App test? What do apps like ours, built from Add-on Builder, need to do to comply with the Python upgrade readiness requirement? Will Add-on Builder release a new version after dropping Python 2 entirely, and will we need to rebuild our apps with the new builder version?

Please let me know. Thanks!

Lixin
Why do "tstats" and "stats" return different results for what looks like the same count? I need an explanation. I use Splunk version 8.2.0.
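A common cause is that the two commands read different data: tstats operates on indexed fields in the tsidx files (and on accelerated data models), while stats runs over events and their search-time extracted fields. A minimal sketch of the comparison, assuming a hypothetical index named web:

| tstats count where index=web by sourcetype

index=web | stats count by sourcetype

If a field exists only at search time, tstats cannot group by it at all, and the counts can diverge whenever the indexed metadata and the search-time view of the events disagree.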
How can I make a table in a dashboard responsive and scrollable, both in HTML and in the exported PDF, so that all of the table columns are visible and fit on a single page in the exported PDF?
Hi, I have some alerts that I want to change into reports. The reason is that if there are no events, an alert does not send any data/email, whereas a report can deliver at least one blank CSV attachment/email even when there is no data. So, per the business requirement, we want to change all alerts to reports. How can we do that?
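One way is to edit each saved search so it is a schedule-driven report with an email action rather than a triggered alert. A minimal sketch of a savedsearches.conf stanza, with a hypothetical search name, search string, and recipient; the key point is that there is no alert condition, so the email with the CSV attached goes out on every scheduled run, results or not:

[daily_status_report]
search = index=main sourcetype=my_sourcetype | stats count by host
enableSched = 1
cron_schedule = 0 8 * * *
action.email = 1
action.email.to = team@example.com
action.email.sendcsv = 1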
Hi all, I am really struggling with subtracting two dates from each other. It sounds easy, but it is driving me literally crazy. All I want is to subtract now() from a calculated date field.

| eval temp = relative_time(a, b)
| eval newdate = temp - now()

temp has a value of "1625634900.000000". newdate always comes out as 01.01.1970. What am I missing? Thanks in advance!
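The subtraction itself works: the result is a small number of seconds, and rendering that small number as a date lands right next to the Unix epoch, hence 01.01.1970. Treat the difference as a duration instead. A minimal sketch, assuming a target date seven days out; relative_time() and tostring(X, "duration") are standard eval functions:

| makeresults
| eval temp = relative_time(now(), "+7d@d")
| eval diff_seconds = temp - now()
| eval diff_readable = tostring(diff_seconds, "duration")

diff_readable then shows the gap as HH:MM:SS rather than a misinterpreted epoch timestamp.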
I can query my Splunk instance using the CLI with the following command:

/opt/splunk/bin/./splunk search 'index=* host=* mitre_technique!=- | stats count BY mitre_technique | fields - count' -auth user:password -app 'custom_app' -preview true

It returns results:

mitre_technique
---------------------------------------------------
T1003 - /etc/passwd and /etc/shadow
T1007 - System Service Discovery
T1011 - Exfiltration over Bluetooth
T1016 - Internet Connection Discovery
T1018 - Remote System Discovery
T1025 - Data from Removable Media
T1033 - System Owner/User Discovery
...

However, if I run it from within a Python script:

print(subprocess.Popen(["/"+postpath+"splunk/bin/./splunk", "search", "'index=*", "host=*", "mitre_technique!=-", "|", "stats", "count", "BY", "mitre_technique", "|", "fields", "-", "count'", "-auth", splunkuser.strip()+":"+splunkpswd.strip(), "-app", "'custom_app'", "-preview", "true"]).communicate())

it returns:

"Action forbidden."
(None, None)

Does anyone know why this is? How can I get results returned when running the command from my Python script? Thank you in advance.
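One likely culprit is the tokenization: subprocess does not go through a shell, so the single quotes and the chopped-up pieces of the SPL string arrive at the CLI as separate literal arguments, producing a mangled search. A minimal sketch of the same call with the query passed as a single argument (the path and credentials are placeholders):

import subprocess

# Pass the whole SPL query as ONE argv element with no shell-style quotes:
# subprocess hands argv to the binary directly, so quotes would become part
# of the arguments and corrupt the search string.
result = subprocess.run(
    ["/opt/splunk/bin/splunk", "search",
     "index=* host=* mitre_technique!=- "
     "| stats count BY mitre_technique | fields - count",
     "-auth", "user:password",
     "-app", "custom_app",
     "-preview", "true"],
    capture_output=True, text=True,
)
print(result.stdout)

Note also that Popen(...).communicate() printed (None, None) because stdout/stderr were not captured; capture_output=True takes care of that.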
Hi All, we configured the logs of a Nutanix cluster to be pushed to Splunk. Inside Splunk, I can see logs showing [An unsuccessful login attempt was made with username: xxx]. How can I turn this into a report? I am kind of lost on how to start. Can someone please explain or guide me along? Thank you. Regards, Alex
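A typical starting point is to match the message text, pull the username out with rex, and aggregate. A minimal sketch, assuming a hypothetical index named nutanix and that the message reads exactly as shown:

index=nutanix "An unsuccessful login attempt was made with username"
| rex "username:\s*(?<username>\S+)"
| stats count AS failed_attempts BY username
| sort - failed_attempts

Saving that search via Save As > Report then gives you something you can schedule and email.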
Hello everyone! I need some help figuring out how to build this base search the best way without hitting the 500,000-result limit as well.

<search id="base_search">
  <query>index=Test | fields orders status countryCode</query>
  <earliest>-60m@m</earliest>
  <latest>now</latest>
</search>

I need it to be used with these different searches:

<search base="base_search">
  <query>search | timechart span=1d count(orders) by status</query>
</search>
<search base="base_search">
  <query>search | timechart span=30m count(orders) by status</query>
</search>
<search base="base_search">
  <query>search status!="Cancelled" | timechart span=1d count(orders) by status</query>
</search>
<search base="base_search">
  <query>search countryCode="SWE" | timechart span=1d count(orders) by status</query>
</search>

Or am I missing something simple? I know base searches need to be transforming to avoid hitting the cap, but how would I do that without losing the ability to use the search command for the different things I need later, like specific countries etc.?
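One common pattern is to make the base search transforming at the finest granularity any panel needs (here 30 minutes), keeping status and countryCode as group-by fields so the post-process searches can still filter on them, then re-aggregate per panel. A minimal sketch, assuming the per-bucket counts can simply be summed back up:

<search id="base_search">
  <query>index=Test | bin _time span=30m | stats count(orders) AS orders BY _time status countryCode</query>
  <earliest>-60m@m</earliest>
  <latest>now</latest>
</search>
<search base="base_search">
  <query>search countryCode="SWE" | timechart span=1d sum(orders) AS orders BY status</query>
</search>

Because the base search emits at most one row per 30-minute bucket per status/countryCode pair, it stays far below the results cap while search-command filters keep working downstream.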
Hi guys, I'm pretty new to Splunk and do not know how to create the search I need. We are forwarding events from our Fault Monitoring system to Splunk. There are three types of events sharing the same fields: SET, UPDATE and CLEAR. If an alarm is raised, SET is the first event in Splunk; afterwards, as more fields are filled in by the monitoring system, UPDATE events arrive. Depending on the fault, several (n) UPDATE events can occur. When the fault is closed, the CLEAR event is received by Splunk. At some point the field "TTID" in an UPDATE event contains a TTID, and all following UPDATEs contain it as well.

What I am trying to achieve is to search for SET and UPDATE events and calculate the duration between the SET and the first UPDATE whose TTID field contains *INC*. I've created a search to get the duration between SET and CLEAR, but since UPDATE can occur n times, I do not know how to get the time between the SET and the first UPDATE containing *INC* in the TTID field. Thanks a lot for your help!
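One stats-based sketch: keep the SET event plus only the UPDATEs whose TTID matches, then take the earliest timestamp of each per alarm. The index name fault_monitoring and the field names event_type and alarm_id are assumptions standing in for whatever labels and correlates your events:

index=fault_monitoring (event_type="SET" OR (event_type="UPDATE" AND TTID="*INC*"))
| stats min(eval(if(event_type=="SET", _time, null()))) AS set_time
        min(eval(if(event_type=="UPDATE", _time, null()))) AS first_inc_update
        BY alarm_id
| eval duration_seconds = first_inc_update - set_time

Filtering the matching UPDATEs in the base search means the min() only ever sees candidates that already contain *INC*, so the n intervening UPDATEs never get in the way.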
Hi, the Fortinet FortiGate App for Splunk is not working and the dashboards are empty. I have installed both the app and the TA. Below is my data input:

type: UDP:514
sourcetype: fgt_log

If anyone knows the setup or configuration changes needed, please suggest what changes I have to make. Thanks
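For reference, a minimal sketch of the matching inputs.conf stanza on the receiving instance; as far as I know the Fortinet TA's props/transforms key off sourcetype fgt_log and rewrite it at index time (to fgt_traffic, fgt_event, etc.), so the TA must be installed wherever parsing happens (indexers or a heavy forwarder) as well as on the search head for the dashboards to find data:

[udp://514]
sourcetype = fgt_log
no_appending_timestamp = true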
Hi everyone, I want to install Splunk DB Connect 1.2.2 on Splunk 7.3.9, and I want to know whether this combination is OK, or if there is a version compatibility page. Thank you, and have a nice day.
Hi everyone, we are currently looking for a config file (or files) that contains the details below, instead of running executables. Running the executables requires privileged access, which we are trying to limit. Are there any config files we can check that hold the outputs below? If yes, please share the relevant config file details.

# ./splunk display listen
# ./splunk show deploy-poll
# ./splunk list forward-server
# ./splunk show servername
# ./splunk show default-hostname
# /opt/splunkforwarder/bin/splunk list monitor
# /opt/splunkforwarder/bin/splunk version

Thank you. Cheers. Nestaz
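As a sketch, each of those CLI commands essentially reads one .conf setting, so the same information can be pulled from the files directly. Bear in mind the effective value may live in $SPLUNK_HOME/etc/system/local or in any app's local directory, since Splunk merges layers:

# splunk display listen        -> inputs.conf, [splunktcp://...] stanzas
# splunk show deploy-poll      -> deploymentclient.conf, targetUri under [target-broker:deploymentServer]
# splunk list forward-server   -> outputs.conf, server= under [tcpout:...] stanzas
# splunk show servername       -> server.conf, serverName under [general]
# splunk show default-hostname -> inputs.conf, host under [default]
# splunk list monitor          -> inputs.conf, [monitor://...] stanzas
# splunk version               -> $SPLUNK_HOME/etc/splunk.version (plain file)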
I am going through Lab Module 12 – Creating Lookups. I have downloaded the products.csv file and am trying to save it following the steps, but I get this error message when I press the save button:

Encountered the following error while trying to save: File is binary or file encoding is not supported, only utf-8 encoded files are supported
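This usually means the CSV was saved in a non-UTF-8 encoding (Excel, for example, can produce UTF-16 or add a byte-order mark). One hedged fix, assuming the file came out as UTF-16, is to re-encode it before uploading:

iconv -f UTF-16 -t UTF-8 products.csv > products_utf8.csv

Opening the file in a text editor and re-saving it with UTF-8 encoding achieves the same thing.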
Hi All, I have seen a few options for the issue I have, but wanted to know whether Splunk can apply multiple props.conf settings to a specific feed.

ISSUE: We have a sourcetype coming in that needs its date format set; this part is easy. The complication is that the feed comes from servers in different time zones as well.

PROPOSED SOLUTION: Wondering if I can have a props stanza for the sourcetype to configure the date format, plus secondary stanzas for hosts to define the time zone. Something like:

[broken_sourcetype]
MAX_TIMESTAMP_LOOKAHEAD = 27
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
TIME_PREFIX = ^

[host::UShost*]
TZ = America/New_York

[host::AUShost*]
TZ = Australia/Sydney
Hi guys, I am new to Splunk and would like to create a report based on the number of times a particular Windows event code shows up.

My search string:

index="windows-servers" sourcetype=WinEventLog:Security EventCode=4625 | table ComputerName, EventCode, Message

The above gives me what I want, but I would like to streamline it further so it ends up in a CSV file. The table I have in mind has columns like: Hostname, EventCode, Number of Times Showing Up, Action, Message, User Account. Can someone please guide me or point me in the right direction? Thank you so much.
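A sketch that aggregates instead of listing raw events and writes the result out as a CSV lookup; the Account_Name field and the lookup file name are assumptions based on common WinEventLog:Security extractions:

index="windows-servers" sourcetype=WinEventLog:Security EventCode=4625
| stats count AS times_seen BY ComputerName, EventCode, Account_Name
| sort - times_seen
| outputlookup failed_logons.csv

Alternatively, save it as a scheduled report with the "attach CSV" email option enabled to receive the file by email.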
I am looking for the top 5-10 Splunk apps / TAs to help with daily security checks, watching for UBA behaviors, ransomware monitoring, etc. Thank you in advance.
Are TAs always hidden? How do I hide an app from general view so it is only viewable by admins?
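Two hedged levers, assuming file access to the instance: app.conf controls whether the app appears in Splunk Web at all, and metadata/local.meta controls which roles can read it. TAs typically ship with is_visible = 0, which is why they do not show up in the launcher:

# $SPLUNK_HOME/etc/apps/<app>/local/app.conf -- hide from the app menu
[ui]
is_visible = 0

# $SPLUNK_HOME/etc/apps/<app>/metadata/local.meta -- restrict reads to admins
[]
access = read : [ admin ], write : [ admin ]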
Hello, I understand joins are expensive in Splunk. When I have a query with two joins, which subsearch executes first? I'm asking because I want the most limiting subsearch to execute first, followed by the second subsearch, then the main search. Example:

index=app host=foo Dataquery1
| join type=inner [ search index=app host=foo | table Dataquery2.Result1 ]
| join type=inner [ search index=app host=foo | table Dataquery2.Result1, Dataquery3.Result1 ]
| table Dataquery1.Result, Dataquery2.Result1, Dataquery3.Result1
Hi, I have a HEC input on an indexer, and I am trying to send Palo Alto traffic logs over HEC. I have this stanza in props.conf:

[source::hec]
pulldown_type = true
SHOULD_LINEMERGE = false
TIME_PREFIX = ^(?:[^,]*,){5}
MAX_TIMESTAMP_LOOKAHEAD = 100
#TRANSFORMS-sourcetype =  pan_traffic
REPORT-trafic_fields = pan_trafic_fields

and this in transforms.conf:

[pan_trafic_fields]
DELIMS = ","
FIELDS = "receive_time","serial_number","log_type","log_subtype","src_ip","dest_ip","rule","app","vsys","src_zone","dest_zone","src_interface","dest_interface","log_forwarding_profile","session_id","repeat_count","src_port","dest_port","transport","action","bytes","bytes_out","bytes_in","packets","start_time","duration","http_category","sequence_number","src_location","dest_location","packets_out","packets_in","session_end_reason","dvc_name","action_source","tunnel_id"

I am trying to test it with curl:

curl -k "https://172.31.72.93:8088/services/collector/raw?cca3-f29f63e09fdc&sourcetype=pan:log" -H "Authorization: Splunk 92a1a276-eee8-XXXX-XXXX-11d002640ad0" -d '"2021/07/05 12:30:06",44A1B3FC68F5304,TRAFFIC,end,103.125.191.136,10.0.0.10,splunk,incomplete,vsys1,untrusted,trusted,ethernet1/3,ethernet1/2,log-forwarding-default,574277,1,52564,8088,tcp,allow,74,74,0,1,"2021/07/05 12:30:06",0,any,730218,"United States",10.0.0.0-10.255.255.255,1,0,aged-out,PA-VM,from-policy,0'

The sourcetype is being recognised by Splunk as pan:traffic, as expected, but the parsing is not working on the indexers and no fields are showing in my search. Am I missing something here?
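One thing worth checking: REPORT-* is a search-time extraction, so it runs on the search head and is matched against what the events look like at search time, not where they arrived. Since the events end up as sourcetype pan:traffic, a hedged sketch is to attach the extraction to that sourcetype instead of [source::hec]:

# props.conf on the search head -- search-time extraction keyed to the final sourcetype
[pan:traffic]
REPORT-trafic_fields = pan_trafic_fields

The transforms.conf stanza can stay as-is; it just has to be deployed where the search head can see it.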