All Topics
Having problems creating a props configuration. I am seeing "could not use strptime to parse timestamp". The logs below come from Docker:

```
{"log":"[20:52:02] [/home/a153509/.local/share/code-server/extensions/ms-toolsai.jupyter-2022.9.1303220346]: Extension is not compatible with Code 1.66.2. Extension requires: 1.72.0.\n","stream":"stderr","time":"2023-03-06T20:52:02.219440215Z"}
{"log":"[20:52:02] [/home/a153509/.local/share/code-server/extensions/ms-python.vscode-pylance-2023.1.10]: Extension is not compatible with Code 1.66.2. Extension requires: 1.67.0.\n","stream":"stderr","time":"2023-03-06T20:52:02.219891147Z"}
{"log":"[20:52:02] [\u003cunknown\u003e][80d9f7e6][ExtensionHostConnection] New connection established.\n","stream":"stdout","time":"2023-03-06T20:52:02.604222684Z"}
{"log":"[20:52:02] [\u003cunknown\u003e][80d9f7e6][ExtensionHostConnection] \u003c1453\u003e Launched Extension Host Process.\n","stream":"stdout","time":"2023-03-06T20:52:02.617643295Z"}
{"log":"[IPC Library: Pty Host] INFO Persistent process \"1\": Replaying 505 chars and 1 size events\n","stream":"stdout","time":"2023-03-06T20:52:06.927032062Z"}
{"log":"[IPC Library: Pty Host] WARN Shell integration cannot be enabled for executable \"/bin/bash\" and args undefined\n","stream":"stdout","time":"2023-03-06T20:52:56.754368802Z"}
{"log":"[20:57:00] [\u003cunknown\u003e][1af3f49a][ExtensionHostConnection] \u003c766\u003e Extension Host Process exited with code: 0, signal: null.\n","stream":"stdout","time":"2023-03-06T20:57:00.839578031Z"}
{"log":"[02:12:50] [\u003cunknown\u003e][adf26d01][ManagementConnection] The client has disconnected, will wait for reconnection 3h before disposing...\n","stream":"stdout","time":"2023-03-07T02:12:50.789255518Z"}
{"log":"[05:12:59] [\u003cunknown\u003e][adf26d01][ManagementConnection] The reconnection grace time of 3h has expired, so the connection will be disposed.\n","stream":"stdout","time":"2023-03-07T05:12:59.567198587Z"}
{"log":"[13:16:53] [\u003cunknown\u003e][adf26d01][ManagementConnection] Unknown reconnection token (seen before)\n","stream":"stderr","time":"2023-03-07T13:16:53.295162729Z"}
{"log":"[13:16:53] [\u003cunknown\u003e][80d9f7e6][ExtensionHostConnection] The client has reconnected.\n","stream":"stdout","time":"2023-03-07T13:16:53.453120386Z"}
```

Here is my props.conf (auto-learned sourcetype):

```
SHOULD_LINEMERGE=false
LINE_BREAKER=([\n\r]+)\s*("log":"{
NO_BINARY_CHECK=true
TIME_PREFIX="time"
MAX_TIMESTAMP_LOOKAHEAD=48
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.9N%z
TRUNCATE=999999
CHARSET=UTF-8
KV_MODE=json
ANNOTATE_PUNCT=false
```
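In the stanza above, TIME_FORMAT has .9N where strptime needs .%9N for the nine-digit nanoseconds, which by itself would produce the "could not use strptime" error. A minimal sketch of a stanza that should parse these lines, assuming each JSON object sits on its own line (the sourcetype name is a placeholder):

```
[docker:json]
SHOULD_LINEMERGE = false
# break before each new {"log": ...} object
LINE_BREAKER = ([\r\n]+)(?=\{"log")
TIME_PREFIX = "time"\s*:\s*"
MAX_TIMESTAMP_LOOKAHEAD = 40
# %9N = nine-digit subseconds, as in 2023-03-06T20:52:02.219440215Z
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%z
KV_MODE = json
TRUNCATE = 999999
```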
Hello Community, I am having issues connecting my Universal Forwarder with a Heavy Forwarder. I have the following setup: UF-->HF-->IDX. I can see the logs from HF to IDX, but I'm not sure why I cannot see logs from UF-->HF. The connection HF-->IDX is [splunktcp-ssl], whereas the connection UF-->HF is [tcpout]. My question is how to troubleshoot the broken connection? I read the UF logs but still cannot find the issue. Any help much appreciated. Thank you all!
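For reference, a minimal matched pair for that hop looks like the sketch below (hostname, port, and cert paths are placeholders). The key point is that an SSL listener needs an SSL-configured output; a plain [tcpout] pointed at a [splunktcp-ssl] port fails the TLS handshake:

```
# outputs.conf on the UF
[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
server = my-hf.example.com:9997
# if the HF listens with [splunktcp-ssl], SSL must be enabled here too, e.g.:
# clientCert = $SPLUNK_HOME/etc/auth/server.pem
# sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem

# inputs.conf on the HF
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
```

To watch the connection attempts from both ends:

```
index=_internal (component=TcpOutputProc OR component=TcpInputProc) (log_level=ERROR OR log_level=WARN)
```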
Hi, I have a lookup table where the column names are weekdays (like monday, tuesday, wednesday, ...) with possible values of 1 and 0 only. What I want to achieve: ...some query | eval day=strftime(now(),"%A") | where 'day'=1 but this doesn't seem to be working. Any idea how to search dynamic fields? Thanks
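Field names aren't resolved dynamically in a where clause, but the day name can select the matching column with case(); a sketch, assuming lowercase weekday columns coming out of the lookup:

```
...some query
| eval day=lower(strftime(now(), "%A"))
| eval day_flag=case(day=="monday", monday,
    day=="tuesday", tuesday,
    day=="wednesday", wednesday,
    day=="thursday", thursday,
    day=="friday", friday,
    day=="saturday", saturday,
    day=="sunday", sunday)
| where day_flag=1
```

The original `where 'day'=1` compares the day-name string (e.g. "Monday") to 1, which is why it never matches.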
Hi team, I want to set up email & Slack alerts for when error code 405 occurs in NGINX access logs. Splunk should trigger an alert whenever a 405 error code appears.
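A sketch of the alert's base search (index and sourcetype are placeholders for wherever the NGINX access logs land):

```
index=web sourcetype=nginx:access status=405
```

Saved as an alert with "Number of Results > 0" as the trigger condition, this can then have an email action and a Slack action (via the Slack Notification Alert app) attached as trigger actions.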
Hi All, we have a newly installed ES cluster where we cannot see any actions populating in Adaptive Response. We tried installing ES on a standalone server and it works fine. Below is the error we are getting. Splunk version is 8.2 and ES is 7.0.2. Thanks in advance.
Hi Guys, We have a Windows Controller and recently upgraded it, and we now have end-to-end SSL from the BigIP F5 to the Controller working. When we log into the admin console (i.e. admin.jsp) and back into the normal Controller UI, the "App Server" tier doesn't show the node. Does anyone know the path where the Java agent is located on the Windows Controller?
Hi everyone, I'm trying to view the events from Azure AD MFA in Splunk Cloud. Use the sign-ins report to review Azure AD Multi-Factor Authentication events - Azure Documentation https://learn.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-reporting We already have the Microsoft Office 365 App and Add-on in Splunk, but in the authentication logs we are not seeing anything related to MFA data. Is it supposed to be in Splunk already? Am I missing something? Is there another way to collect that report into Splunk? Thanks!
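If the goal is the Azure AD sign-in report specifically, it may be worth checking whether anything is pulling the sign-in logs themselves, since the O365 Management Activity feed is not the same data set. A hypothetical check, assuming the Splunk Add-on for Microsoft Azure's default sign-in sourcetype:

```
sourcetype="azure:aad:signin" authenticationRequirement="multiFactorAuthentication"
```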
In Incident Review, one can create a filter and save it as a default.  Where does it store that configuration so I can push it across multiple ES instances?
I am working on a custom alert app to replace our old custom alert script action. It was working fine, but all of a sudden I am no longer getting the --execute argument passed, and my script doesn't work anymore. Here is the code:

```python
import sys
import json
from datetime import datetime

# log() and main() are defined elsewhere in this script

if __name__ == "__main__":
    # clear logs
    now = datetime.now()
    dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
    log(dt_string + ": Start Version 1.2", "w")
    log("Checking to see if we have any arguments...")
    log("Number of arguments: " + str(len(sys.argv)))
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        log("We have arguments.")
        try:
            payload = json.loads(sys.stdin.read())
            result_file = payload['results_file']
            # Pass the payload to main for processing....
            main(payload)
            # End
            now = datetime.now()
            dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
            log(dt_string + ": Processing complete.")
        except:
            log("We have an error on settings, exiting")
            sys.exit()
    else:
        log("There were no arguments. Exiting.")
        sys.exit()
```

Here is the output of my logging:

```
16/03/2023 10:55:16: Start Version 1.2
Checking to see if we have any arguments...
Number of arguments: 1
There were no arguments. Exiting.
```

I have no idea what the --execute argument is, how it is passed, or what it actually means, and I can't find much about it. Hoping someone can shed some light here. Thanks!
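For context on where --execute normally comes from: modular alert actions (the alert_actions.conf kind) are launched by splunkd as `script.py --execute` with a JSON payload written to stdin when the stanza requests it. A sketch of the relevant stanza (names here are placeholders):

```
# default/alert_actions.conf inside the alert action app
[myalert]
is_custom = 1
label = My Alert Action
payload_format = json
```

If argv suddenly has no --execute, it is worth checking that the saved search is still wired to this custom action rather than the legacy "Run a script" action, which invokes scripts with a different argument list; splunkd.log (component=sendmodalert) usually shows how the script was invoked.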
Hi All, I have one universal forwarder which is reporting to the DS, and I am receiving its internal logs, but I am not getting any data into the index. The logs are present on the server. How do I troubleshoot this kind of issue?
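A couple of first checks people usually run on the forwarder itself (standard Splunk CLI, nothing custom):

```
# On the UF: confirm the output destination shows as Active
$SPLUNK_HOME/bin/splunk list forward-server

# Confirm the monitored files are actually being read
$SPLUNK_HOME/bin/splunk list inputstatus
```

And from the search head, the UF's own internal logs often name the problem directly:

```
index=_internal host=<your_uf> log_level=ERROR OR log_level=WARN
```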
Hello all, I have three individual searches for single value visualizations. The value for each viz is the sum of a field: bytes, bytes_in, and bytes_out. Each search is | stats sum(bytes) as Total, sum(bytes_in) as In, or sum(bytes_out) as Out, so there are three searches, one per field, and a single value viz for each. I have looked at the trellis viz, but it is not much help. My actual SPL uses the same formula for each field: index=squid | stats sum(bytes_in) as TotalBytes | eval gigabytes=TotalBytes/1024/1024/1024 | rename gigabytes as "Bytes In" | table "Bytes In" Is there some way to put all three stats commands in the same search, and maybe have the trellis pick up each calculation? I also looked at trying to put each single value in a table, 3 columns by one row, etc. How can this be accomplished? Thanks again, eholz1
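The three sums can come out of one stats call, and transpose turns them into rows that a trellis single value can split on; a sketch against the same squid index:

```
index=squid
| stats sum(bytes) as Total sum(bytes_in) as In sum(bytes_out) as Out
| eval Total=round(Total/1024/1024/1024,2), In=round(In/1024/1024/1024,2), Out=round(Out/1024/1024/1024,2)
| transpose
| rename column AS Measure, "row 1" AS GB
```

With the Single Value visualization and trellis layout split by Measure, each of the three rows renders as its own panel.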
I can't seem to find an efficient way to do this. I have a text box where a user's first and last name is entered, and depending on the search the token will be used, but the text box value is "first last" and I need to transform it to be either first.last or first-last. Please help, as everything I have tried does not work.
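In Simple XML, a change block on the text input can derive both variants with eval's replace(); a sketch (the token names here are made up):

```xml
<input type="text" token="full_name">
  <label>First Last</label>
  <change>
    <!-- "first last" becomes "first.last" -->
    <eval token="name_dot">replace("$value$", " ", ".")</eval>
    <!-- "first last" becomes "first-last" -->
    <eval token="name_dash">replace("$value$", " ", "-")</eval>
  </change>
</input>
```

The searches can then reference $name_dot$ or $name_dash$ instead of the raw input token.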
Currently on v 9.0.3 (but has been happening forever). On our universal forwarders we're using the Splunk provided bin apps for various things. In this example, I just noted the win_installed_apps.bat  is running 78 times in a 24 hour period, even though the interval is set to once every 24 hours: [script://.\bin\win_installed_apps.bat] disabled = 0 ## Run once per day interval = 86400 sourcetype = Script:InstalledApps Other examples that are set for 86400 seconds include win_timesync_configuration.bat and win_timesync_status.bat that both run 39 times a day. We have a home grown script to check for compliance set to run every hour (3600 seconds) and it runs every hour like it should. Why are so many others ignored? Thoughts?
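One thing worth ruling out: scripted inputs also fire at every splunkd (re)start, independent of the interval, so frequent restarts can account for extra runs. The ExecProcessor log on the forwarder shows each launch; a sketch for checking:

```
index=_internal sourcetype=splunkd component=ExecProcessor win_installed_apps.bat
```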
Hi, I am currently receiving an alert that license consumption is exceeding 80%. I need to know which index is consuming the most license over the last 30 days or last 7 days. This query shows the total license consumption, but I need to know which index or sourcetype is generating the most license consumption.

```
`sim_licensing_summary_base`
| `sim_licensing_summary_no_split("")`
| append
    [| search (index=summary source="splunk-entitlements")
     | bin _time span=1d
     | stats max(ingest_license) as license by _time]
| stats values(*) as * by _time
| rename license as "license limit"
| fields - volume
```
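For a per-index breakdown, the license manager's own license_usage.log is the usual source (type=Usage events carry the bytes in b and the index in idx):

```
index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| eval GB=round(b/1024/1024/1024, 2)
| stats sum(GB) as GB by idx
| sort - GB
```

Swapping `by idx` for `by st` gives the same view per sourcetype.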
Reimagine what you can do with your dashboards. Dashboard Studio is Splunk's newest dashboard builder to easily create visually compelling, interactive dashboards with an intuitive UI. With Dashboard Studio's advanced visualization tools and fully customizable layouts, you can visualize more insights from your data and communicate powerful data stories to any audience. This challenge is an opportunity to level up your dashboard skills, showcase your visualizations, and win a $100 gift card to the Splunk Store. We supply the themes and the datasets. You create an impactful, story-telling dashboard in Dashboard Studio. It's that easy! This year, we have four dashboard themes: Security, IT, DevOps, and Business Insights. For each theme, 10 winners that meet the submission criteria will be randomly selected to receive a $100 Splunk Store gift card (40 winners total).

Dashboard Use Case Examples
- Security: number of threats detected, threats location, issues reported, MTTR, issue urgency rating, and more.
- DevOps: response time by app, errors by app, errors by host, service health location, and more.
- IT: server downtime, reported device issues, network issues, and more.
- Business Insights: products sold, customer purchases by location, revenue, purchased product type, customer satisfaction score, employee happiness score, and more.

How to Enter the Challenge
- Register here to download the four datasets.
- Get a free trial of Splunk Cloud Platform (encouraged for access to the newest features). You can also download Splunk Enterprise or use your own Splunk environment.
- Upload your dataset(s) and start designing!
- When you're ready to submit, upload your dashboard to the final submission form and agree to the legal acknowledgements by May 12th.

All submissions that meet the criteria will be entered into the $100 gift card raffle (Note: only US customers are eligible to win).

Submission Criteria
- You can submit a maximum of ONE dashboard per theme
- Your dashboard must have at least 3 panels
- Your dashboard must have at least 1 search-based visualization

Splunk's Dashboard Picks: The Best of the Best! The cream of the crop will have the opportunity to be featured as examples in a "Splunk's Dashboard Picks" blog, customer resources, and even our in-product Examples Hub to help inspire other members of the Splunk community!

Tips to be Featured
- Tell a data story with your dashboard - take advantage of Studio's visualization tools
- Make the dashboard accessible to any audience by using images and icons, descriptive labels, and logical formats
- Use our resource toolkit for a plethora of best practices and tips!

Register here to enter the challenge and to get tips and tricks, expert advice, handy content, and submission timeline updates. You can check out our plethora of resources below, join our Slack channel for discussion, and sign up for our office hours!

Getting Started
- Community Office Hours: Dashboard Studio Challenge
- Dashboard Studio Tutorial and demo
- Dashboard Studio Introduction Tech Talk
- Splunk Dashboard Studio Documentation
- Introduction to Dashboards e-learning Course
- Examples Hub - Find the in-product Examples Hub from the Dashboards page in Search & Reporting

Questions or Suggestions?
- User Slack Channel - #dashboard-studio (request access here)
- Splunk Ideas - Dashboard Studio for feature or enhancement requests
- Splunk Community - Dashboards & Visualizations for questions

Ramping Up
- Communicating Data Stories to Any Audience: Top 5 Dashboard Design Best Practices Guide
- Dynamic Dashboards e-learning Course
- Improving Dashboard Performance and Resource Usage Tech Talk
- Streamlined Dashboard Building: Productivity Tips and Tricks Tech Talk
- Level Up Your Dashboards with Interactivity .conf session

Blogs
- Dashboard Design: Getting Started with Best Practices
- Dashboard Design: Visualizations and Configurations
- How We Built It: Getting Spooky with Splunk Dashboards
Hello, a search is retrieving the following results, ordered by event date:

Date                    value
2023-03-02 22PM         10
2023-03-02 20PM         05
2023-03-02 17PM         25
2023-03-02 06AM         03

Considering value, I'd like to calculate the % between the two most recent values as (PrevLatest*Latest)/100, e.g. (5*10)/100 = 0.5, i.e. "50%". I'm new to this, any idea on how to achieve it? This has to be used to raise an alert. Many thanks.
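One way to do this, assuming the results really are sorted newest first as shown: keep the two most recent rows and split them apart with mvindex(), since list() preserves row order:

```
...base search
| head 2
| stats list(value) as vals
| eval Latest=mvindex(vals, 0), PrevLatest=mvindex(vals, 1)
| eval pct=(PrevLatest * Latest) / 100
```

The pct field can then drive the alert's trigger condition (e.g. a custom condition of `search pct > 0.4`).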
Hi, This is the log sent from Docker:

```
{"log":"[21:52:02] [/home/a143519/.local/share/code-server/extensions/ms-toolsai.jupyter-2021.9.1303320346]: Extension is not compatible with Code 1.66.2. Extension requires: 1.72.0.\n","stream":"stderr","time":"2023-03-06T21:52:02.219440215Z"}
{"log":"[21:52:02] [/home/a153509/.local/share/code-server/extensions/ms-python.vscode-pylance-2023.1.10]: Extension is not compatible with Code 1.66.2. Extension requires: 1.67.0.\n","stream":"stderr","time":"2023-03-06T21:52:02.219891147Z"}
{"log":"[21:52:02] [\u003cunknown\u003e][80d9f7e6][ExtensionHostConnection] New connection established.\n","stream":"stdout","time":"2023-03-06T21:52:02.604222684Z"}
{"log":"[21:52:02] [\u003cunknown\u003e][80d9f7e6][ExtensionHostConnection] \u003c1453\u003e Launched Extension Host Process.\n","stream":"stdout","time":"2023-03-06T21:52:02.617643295Z"}
{"log":"[IPC Library: Pty Host] INFO Persistent process \"1\": Replaying 505 chars and 1 size events\n","stream":"stdout","time":"2023-03-06T21:52:06.927032062Z"}
{"log":"[IPC Library: Pty Host] WARN Shell integration cannot be enabled for executable \"/bin/bash\" and args undefined\n","stream":"stdout","time":"2023-03-06T21:52:56.754368802Z"}
{"log":"[21:57:00] [\u003cunknown\u003e][1af3f49a][ExtensionHostConnection] \u003c766\u003e Extension Host Process exited with code: 0, signal: null.\n","stream":"stdout","time":"2023-03-06T21:57:00.839878031Z"}
{"log":"[02:12:50] [\u003cunknown\u003e][abc26d01][ManagementConnection] The client has disconnected, will wait for reconnection 3h before disposing...\n","stream":"stdout","time":"2023-03-07T04:12:50.789265518Z"}
{"log":"[05:12:59] [\u003cunknown\u003e][abf26c01][ManagementConnection] The reconnection grace time of 3h has expired, so the connection will be disposed.\n","stream":"stdout","time":"2023-03-07T05:12:59.567198587Z"}
{"log":"[13:16:53] [\u003cunknown\u003e][adf26d01][ManagementConnection] Unknown reconnection token (seen before)\n","stream":"stderr","time":"2023-03-07T13:17:53.295162729Z"}
{"log":"[14:16:53] [\u003cunknown\u003e][90d9f9e6][ExtensionHostConnection] The client has reconnected.\n","stream":"stdout","time":"2023-03-07T13:16:53.453120386Z"}
```

Here is my props.conf:

```
SHOULD_LINEMERGE=false
LINE_BREAKER=([\n\r]+)\s*("log":"{
NO_BINARY_CHECK=true
TIME_PREFIX="time"
MAX_TIMESTAMP_LOOKAHEAD=48
TIME_FORMAT=%Y-%m-%dT%H:%M:%S.9N%z
TRUNCATE=999999
CHARSET=UTF-8
KV_MODE=json
ANNOTATE_PUNCT=false
```

I have tried many different props.conf configurations but no luck. Any help would be greatly appreciated!
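Once a candidate stanza is in place, btool confirms which settings actually win after all the config layering, and the Add Data preview (Settings > Add Data > Upload) re-tests timestamp recognition against pasted sample lines without re-ingesting. The sourcetype name below is a placeholder:

```
$SPLUNK_HOME/bin/splunk btool props list docker:json --debug
```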
Can someone guide me in the right direction? I have an issue with src_ip extraction using the nix Splunk TA. I see that the [syslog] stanza in props.conf contains the config below, but I'm unsure how src_ip is actually being extracted using the props and transforms code blocks below. Furthermore, I'm not 100% certain what the transforms stanza is actually doing. I was trying to narrow down where the issue might be with the extraction, but I'm having some difficulty figuring that out. The regex seems very basic.

search: `index=ap_os_nix sourcetype=syslog`
sourcetype = `syslog`
source = `/var/log/auth`

This payload parses incorrectly and also includes the port number:

Mar 16 11:36:43 apnmls02 sshd[21198]: Received disconnect from 172.16.5.49 port 51798:11: Session closed [preauth]
`src_ip="172.16.5.49 port 51798:11"`

The payload below parses the source IP correctly:

Mar 16 11:42:23 apcribl02 sshd[200646]: Connection closed by 172.16.5.49 port 56452
`src_ip = 172.16.5.49`

### Props for syslog sourcetype
```
###### Syslog ######
[source::....syslog]
sourcetype = syslog

[syslog]
EVENT_BREAKER_ENABLE = true
## Event extractions by type
REPORT-0authentication_for_syslog = remote_login_failure, bad-su2, passwd-auth-failure, failed_login1, bad-su, failed-su, ssh-login-failed, ssh-invalid-user, ssh-login-accepted, ssh-session-close, ssh-disconnect, sshd_authentication_kerberos_success, sshd_authentication_refused, sshd_authentication_tried, sshd_login_restricted, pam_unix_authentication_success, pam_unix_authentication_failure, sudo_cannot_identify, ksu_authentication, ksu_authorization, su_simple, su_authentication, su_successful, wksh_authentication, login_authentication
EVAL-action = if(app="su" AND isnull(action),"success",action)
REPORT-account_management_for_syslog = useradd, userdel, userdel-grp, groupdel, groupadd, groupadd-suse
REPORT-password_change_for_syslog = pam-passwd-ok, passwd-change-fail
REPORT-firewall = ipfw, ipfw-stealth, ipfw-icmp, pf
REPORT-routing = iptables
EVAL-signature = if(isnotnull(inbound_interface),"firewall",null())
REPORT-signature_for_syslog_timesync = signature_for_nix_timesync
REPORT-dest_for_syslog = host_as_dest
LOOKUP-action_for_syslog = nix_action_lookup vendor_action OUTPUTNEW action
REPORT-src_for_syslog = src_dns_as_src, src_ip_as_src
FIELDALIAS-dvc = dest as dvc
EVAL-vendor_product = if(isnull(vendor_product), "nix", vendor_product)
```

### Transforms line referenced in Props
```
[src_ip_as_src]
SOURCE_KEY = src_ip
REGEX = (.+)
FORMAT = src::"$1"
```
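On the transforms question: [src_ip_as_src] does not extract src_ip from _raw at all. SOURCE_KEY = src_ip makes it read the already-extracted src_ip field, and FORMAT = src::"$1" copies that value into src. So the bad value is produced earlier, by one of the REPORT- extractions (the ssh-disconnect one, given the sample event). A hypothetical local override that anchors on the IP and stops before the port might look like this (stanza names are made up; local/ files win over the TA's defaults):

```
# local/transforms.conf
[ssh_src_ip_tight]
REGEX = (?:Received disconnect from|Connection closed by)\s+(\d{1,3}(?:\.\d{1,3}){3})
FORMAT = src_ip::$1

# local/props.conf
[syslog]
REPORT-zz_src_ip_tight = ssh_src_ip_tight
```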
CVE-2023-23397 is all the rage right now. Has anyone figured out a way to detect this in office content? I've checked all Microsoft docs I can find, but nothing informs me as to what I'm actually looking for inside an email or contact etc.
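Not a content-side answer, but one widely discussed detection angle for this CVE is the network side effect: the crafted reminder property coerces Outlook into an outbound SMB/NTLM authentication to an attacker-controlled host. A sketch against firewall data (the index and field names are placeholders for whatever your traffic logs use):

```
index=firewall dest_port=445 OR dest_port=139
| where NOT (cidrmatch("10.0.0.0/8", dest_ip)
          OR cidrmatch("172.16.0.0/12", dest_ip)
          OR cidrmatch("192.168.0.0/16", dest_ip))
| stats count by src_ip, dest_ip
```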
Hello, I have data collected through a Splunk HEC on a Heavy Forwarder. The data has this structure: 2023-03-16T16:59:01+01:00 serverIP event_info [data1][data2] {json_data}. I want to get the json_data indexed as the raw data. I have tried several regexes with SEDCMD that all work on a standalone Splunk instance, but they have no effect with the Splunk HF -> Splunk IDX configuration. Here is my latest SEDCMD: SEDCMD-json=s/^[^{]+//g Currently there is no TA on the Splunk indexer, and I am wondering if this is the cause of the issue. Is SEDCMD compatible with HEC? Regards
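For what it's worth, SEDCMD is an index-time (parsing phase) setting, so it has to live on the first parsing instance the data touches, which for HEC is the HF that hosts the token, keyed by the sourcetype the token assigns; a sketch (the sourcetype name is a placeholder):

```
# props.conf on the Heavy Forwarder hosting the HEC input
[my_hec_sourcetype]
SEDCMD-strip_prefix = s/^[^{]+//
```

It is also worth confirming which HEC endpoint the sender uses: payloads sent to /services/collector/event arrive pre-structured and skip parts of the parsing pipeline, whereas /services/collector/raw goes through normal parsing, so the same SEDCMD can behave differently between the two.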