All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I can't seem to find an efficient way to do this. I have a text box where a user's first and last name is entered, and depending on the search the token will be used. The text box value is "first last", and I need to transform it to be either "first.last" or "first-last". Please help; everything I have tried does not work.
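A minimal sketch of one approach, assuming the text box sets a token named name_tok (the token name is an assumption): do the substitution inside the search itself with replace().

```
| eval full_name = "$name_tok$"
| eval name_dot  = replace(full_name, " ", ".")
| eval name_dash = replace(full_name, " ", "-")
```

Alternatively, a Simple XML `<change>` handler with `<eval>` on the text input can set derived tokens (e.g. a dotted and a dashed variant) so each panel's search can reference whichever form it needs.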
Currently on v9.0.3 (but this has been happening forever). On our universal forwarders we're using the Splunk-provided bin apps for various things. In this example, I noticed win_installed_apps.bat is running 78 times in a 24-hour period, even though the interval is set to once every 24 hours:

```
[script://.\bin\win_installed_apps.bat]
disabled = 0
## Run once per day
interval = 86400
sourcetype = Script:InstalledApps
```

Other examples that are set to 86400 seconds include win_timesync_configuration.bat and win_timesync_status.bat, which both run 39 times a day. We have a home-grown compliance-check script set to run every hour (3600 seconds), and it runs every hour like it should. Why are so many others ignored? Thoughts?
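One hedged way to confirm how often the forwarder actually launches the script is to count ExecProcessor messages in the internal logs (a sketch; it assumes the forwarder's _internal logs are forwarded and searchable from your search head):

```
index=_internal sourcetype=splunkd component=ExecProcessor "win_installed_apps.bat"
| timechart span=1h count by host
```

If the count spikes at restarts or config reloads, that points at splunkd relaunching the scripted input rather than the interval being honored.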
Hi, I am currently receiving an alert that license consumption is exceeding 80%. I need to know which index has consumed the most license over the last 30 days or last 7 days. This query shows the total license consumption, but I need to know which index or sourcetype is generating the most license consumption:

```
`sim_licensing_summary_base`
| `sim_licensing_summary_no_split("")`
| append
    [| search index=summary source="splunk-entitlements"
     | bin _time span=1d
     | stats max(ingest_license) as license by _time]
| stats values(*) as * by _time
| rename license as "license limit"
| fields - volume
```
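On Splunk Enterprise, a common way to break license usage down per index is the license usage log (a sketch; it assumes you can search index=_internal on the license master):

```
index=_internal source=*license_usage.log type="Usage"
| stats sum(b) as bytes by idx
| eval GB = round(bytes/1024/1024/1024, 2)
| sort - GB
```

Swapping `by idx` for `by st` gives the same breakdown per sourcetype; run it over the last 7 or 30 days as needed.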
Hello, a search is retrieving the following results, ordered by event date:

Date                 value
2023-03-02 22PM      10
2023-03-02 20PM      05
2023-03-02 17PM      25
2023-03-02 06AM      03

Considering value, I'd like to calculate the percentage between the two most recent values, (PrevLatest*Latest)/100: (5*10)/100 = 0.5 ("50%"). I'm new to this; any idea how to achieve it? This has to be used to raise an alert. Many, many thanks.
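A hedged sketch, assuming the field is literally named value and the events carry the timestamps shown: take the two most recent rows, split them apart with mvindex, and apply the formula.

```
| sort 0 - _time
| head 2
| stats list(value) as vals
| eval latest = mvindex(vals, 0), prev = mvindex(vals, 1)
| eval pct = (prev * latest) / 100
```

The resulting pct field can then drive an alert condition (e.g. trigger when pct crosses a threshold).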
Hi, this is the log sent from Docker (pasted as received, so some brackets and quotes are mangled):

```
("log":"[21:52:02] [/home/a143519/.local/share/code-server/extensions/ms-toolsai.jupyter-2021.9.1303320346]: Extension is not compatible with Code 1.66.2. Extension requires: 1.72.0.\n","stream":"stderr","time":"2023-03-06T21:52:02.2194402152"}{"log":"[21:52:02] [/home/a153509/.local/share/code-server/extensions/ms-python.vscode-pylance-2023.1.10]: Extension is not compatible with Code 1.66.2. Extension requires: 1.67.0.\n", "stream":"stderr","time": "2023-03-06T21:52:02.219891147Z")("log": "[21:52:02] [\u009cunknown\u009e][80d9f7e6][ExtensionHostConnection] New connection established.\n","stream":"stdout","time":"2023-03-06T21:52:02.604222684Z"){"log":"[21:52:02] [\u009cunknown\u009e][80d9f7e6][ExtensionHostConnection] \u003c1453\u009e Launched Extension Host Process. \n","stream":"stdout","time":"2023-03-06T21:52:02.617643295Z"]["log": "[IPC Library: Pty Host] INFO Persistent process "1": Replaying 505 chars and 1 size events\n","stream":"stdout", "time":"2023-03-06T21:52:06.9270320622"} ["log":"[IPC Library: Pty Host] WARN Shell integration cannot be enabled for executable \"/bin/bash and args undefined\n", "stream":"stdout","time":"2023-03-06T21:52:56.754368802Z"}{ log":"[21:57:00] [\u009cunknown\u009e][laf3f49a][ExtensionHostConnection] \u007c766\u007e Extension Host Process exited with code: 0, signal: null.\n","stream"stdout", "time":"2023-03-06T21:57:00 839878031Z"}"log" [02:12:50] [\u009cunknown\u009e][abc26d01][ManagementConnection] The client has disconnected, will wait for reconnection 3h before disposing...\n","stream":"stdout, "time":"2023-03-07T04:12:50.7892655182")("log":"[05:12:59] [\u007cunknown\u007e][abf26c01][ManagementConnection] The reconnection grace time of 3h has expired, so the connection will be disposed.\n", "stream":"stdout","time":"2023-03-07T05:12:59.567198587Z" log":[13:16:53] [\u003cunknown\u003e][adf26d01][ManagementConnection] Unknown reconnection token (seen before) \n","stream":"stderr","time":"2023-03-07T13:17:53 2951627292")("log":"[14:16:53] [\u003cunknown\u003e][90d9f9e6] [ExtensionHostConnection] The client has reconnected. \n","stream":"stdout", "time": "2023-03-07T13:16:53.453120386Z")
```

Here is my props.conf (for the auto-learned sourcetype; key names written with underscores):

```
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\n\r]+)\s*("log":"{
NO_BINARY_CHECK = true
TIME_PREFIX = "time"
MAX_TIMESTAMP_LOOKAHEAD = 48
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%z
TRUNCATE = 999999
CHARSET = UTF-8
KV_MODE = json
ANNOTATE_PUNCT = false
```

I have tried many different props.conf configurations but no luck. Any help would be greatly appreciated!
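A hedged props.conf sketch for splitting back-to-back Docker JSON objects (the sourcetype name is hypothetical, and this assumes the events really arrive concatenated as `...}{"log":"...` with no newline between them): the empty capture group in LINE_BREAKER marks the break position without discarding any characters.

```
# props.conf sketch -- apply on the first full Splunk instance the data reaches
[docker:json]
SHOULD_LINEMERGE = false
# break between }{"log":" pairs; the empty group () is the (zero-width) delimiter
LINE_BREAKER = \}()\{"log":"
TIME_PREFIX = "time":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json
```

Note the first event in a batch keeps its leading `{` and each broken event keeps its trailing `}`, so the JSON stays intact; TIME_PREFIX anchors on the `"time"` key rather than the bracketed wall-clock time inside the log message.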
Can someone guide me in the right direction? I have an issue with src_ip extraction using the nix Splunk TA. I see that the [syslog] stanza in props.conf contains the config below, but I'm unsure how src_ip is actually being extracted using the props and transforms code blocks below. Furthermore, I'm not 100% certain what the transforms stanza is actually doing. I was trying to narrow down where the issue might be with the extraction, but I'm having some difficulty figuring that out. The regex seems very basic.

search: `index=ap_os_nix sourcetype=syslog`
sourcetype = `syslog`
source = `/var/log/auth`

This payload parses incorrectly and also includes the port number:

Mar 16 11:36:43 apnmls02 sshd[21198]: Received disconnect from 172.16.5.49 port 51798:11: Session closed [preauth]

`src_ip="172.16.5.49 port 51798:11"`

The payload below parses the source IP correctly:

Mar 16 11:42:23 apcribl02 sshd[200646]: Connection closed by 172.16.5.49 port 56452

`src_ip = 172.16.5.49`

### Props for syslog sourcetype

```
###### Syslog ######
[source::....syslog]
sourcetype = syslog

[syslog]
EVENT_BREAKER_ENABLE = true

## Event extractions by type
REPORT-0authentication_for_syslog = remote_login_failure, bad-su2, passwd-auth-failure, failed_login1, bad-su, failed-su, ssh-login-failed, ssh-invalid-user, ssh-login-accepted, ssh-session-close, ssh-disconnect, sshd_authentication_kerberos_success, sshd_authentication_refused, sshd_authentication_tried, sshd_login_restricted, pam_unix_authentication_success, pam_unix_authentication_failure, sudo_cannot_identify, ksu_authentication, ksu_authorization, su_simple, su_authentication, su_successful, wksh_authentication, login_authentication
EVAL-action = if(app="su" AND isnull(action),"success",action)
REPORT-account_management_for_syslog = useradd, userdel, userdel-grp, groupdel, groupadd, groupadd-suse
REPORT-password_change_for_syslog = pam-passwd-ok, passwd-change-fail
REPORT-firewall = ipfw, ipfw-stealth, ipfw-icmp, pf
REPORT-routing = iptables
EVAL-signature = if(isnotnull(inbound_interface),"firewall",null())
REPORT-signature_for_syslog_timesync = signature_for_nix_timesync
REPORT-dest_for_syslog = host_as_dest
LOOKUP-action_for_syslog = nix_action_lookup vendor_action OUTPUTNEW action
REPORT-src_for_syslog = src_dns_as_src, src_ip_as_src
FIELDALIAS-dvc = dest as dvc
EVAL-vendor_product = if(isnull(vendor_product), "nix", vendor_product)
```

### Transforms line referenced in Props

```
[src_ip_as_src]
SOURCE_KEY = src_ip
REGEX = (.+)
FORMAT = src::"$1"
```
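For what it's worth, [src_ip_as_src] only copies an already-extracted src_ip field into src (SOURCE_KEY = src_ip with REGEX = (.+) matches the whole value), so the over-capture most likely happens in whichever REPORT extraction first populates src_ip. A hedged sketch of a tighter variant, if you wanted the copy itself to keep only the IPv4 portion (treat this as an illustration, not the TA's intended fix):

```
[src_ip_as_src]
SOURCE_KEY = src_ip
# anchor at the start and stop after the dotted quad, dropping "port NNNN..."
REGEX = ^(\d{1,3}(?:\.\d{1,3}){3})
FORMAT = src::"$1"
```

The more durable fix is usually to correct the upstream extraction (e.g. the ssh-disconnect REPORT) so src_ip never contains the port text in the first place.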
CVE-2023-23397 is all the rage right now. Has anyone figured out a way to detect this in Office content? I've checked all the Microsoft docs I can find, but nothing tells me what I'm actually looking for inside an email or contact, etc.
Hello, I have data collected through a Splunk HEC on a heavy forwarder. The data has this structure:

2023-03-16T16:59:01+01:00 serverIP event_info [data1][data2] {json_data}

I want to get the json_data indexed as the raw data. I have tried several regexes with SEDCMD; they all work on a standalone Splunk instance, but they have no effect with the Splunk HF -> Splunk IDX configuration. Here is my latest SEDCMD:

SEDCMD-json = s/^[^{]+//g

Currently there is no TA on the Splunk indexer and I am wondering if this is the cause of the issue. Is SEDCMD compatible with HEC? Regards
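A hedged sketch of the props placement (the sourcetype name is hypothetical): SEDCMD runs in the parsing pipeline, which for HEC data happens on the instance hosting the HEC input, i.e. the heavy forwarder, so the props must live there. Note also that events sent to the HEC /services/collector/event endpoint generally bypass the parsing pipeline, so SEDCMD would only apply to data arriving via the /raw endpoint; if you are using /event, that would explain the "no effect" behavior regardless of configuration.

```
# props.conf on the heavy forwarder that hosts the HEC input
[my_hec_sourcetype]
# strip everything before the first "{" so only the JSON is indexed
SEDCMD-strip_prefix = s/^[^{]+//
```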
I have a lookup of vulnerability scan data that includes fields such as hostname, IP, OS, CVEs, etc. I would like to give all OSes specified as a desktop OS a field value named Desktop; anything specified as a server OS a field value named Server, with an extra layer of specification for Unix vs. Windows; and anything with a network OS a value of Network, and then put those field values in a new field called OS_Specified.

Here is an example of the OSes I would like to categorize:

Desktop
Windows 10 Enterprise 64 bit Edition Version 1803
Windows 10 Enterprise 64 bit Edition Version 21H1
Windows 10

Server
Red Hat Enterprise Linux 8.7
Windows Server 2012 R2 Datacenter 64 bit Edition
Windows Server 2016 Datacenter Version 1607

Network
Cisco Nexus Switch
CentOS Linux 8.4.2105

I'm assuming eval and/or rex is going to need to be involved, and that is where I would need assistance. I feel like my ask is similar to This but a little more involved.
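A hedged sketch with eval case() and match(), assuming the OS strings live in a field named OS (adjust which patterns map to which bucket for your environment — e.g. where CentOS belongs):

```
| eval OS_Specified = case(
    match(OS, "(?i)Windows Server"),      "Server - Windows",
    match(OS, "(?i)Red Hat|CentOS"),      "Server - Unix",
    match(OS, "(?i)Windows 10"),          "Desktop",
    match(OS, "(?i)Cisco|Nexus"),         "Network",
    true(),                               "Unknown")
```

case() returns the first matching branch, so put the more specific patterns (Windows Server) before the broader ones (Windows 10), and the true() fallback catches anything unclassified for review.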
The closest document I could find on Operating System to Universal Forwarder version compatibility is the download site (link below). Is there another link that can be used? https://www.splunk.com/en_us/download/previous-releases-universal-forwarder.html
I am trying to expand multiple fields from specific log lines using mvexpand, but for some strange reason some fields are not extracted as expected; see the screenshot for an example. I would also like to have the key/value pairs for col and gantry.
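mvexpand only expands one multivalue field at a time, so to keep two fields paired up they need to be zipped together first. A sketch using the col and gantry field names from the post (this assumes both fields have the same number of values per event):

```
| eval pairs = mvzip(col, gantry)
| mvexpand pairs
| rex field=pairs "^(?<col>[^,]*),(?<gantry>.*)$"
| fields - pairs
```

After the rex, each result row carries one aligned col/gantry pair instead of the original multivalue lists.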
Hello guys, I will try to describe my problem as well as I can. I want to get a report/alert when a new exception appears that has never happened before. Let's say I have 15 exceptions that have happened before, like java.lang.NullPointer, java.lang.IllegalStateException, etc. I want to get an alert when a "new" exception appears that never appeared before. Is that possible?
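One common pattern is a "known values" lookup that the alert search both reads and appends to. A hedged sketch — the index, field name, and lookup file name are all assumptions, and the lookup must be created once (e.g. with an initial outputlookup) before the alert runs:

```
index=app_logs exception=*
| stats count by exception
| lookup known_exceptions.csv exception OUTPUT exception AS seen_before
| where isnull(seen_before)
| fields exception
| outputlookup append=true known_exceptions.csv
```

Rows surviving the where clause are exceptions never seen before; they become the alert results and are simultaneously appended to the lookup, so the next run treats them as known. Be aware that running the search updates state, so test runs will "use up" novel exceptions.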
We have a Simple XML dashboard that we want to make visible for Splunk Mobile use. The problem is that this dashboard contains a panel with a visualization that isn't Mobile compatible. We don't want to clone this dashboard and remove this single panel just for Mobile use, because that creates a maintenance burden where we need to keep track of the duplicated Mobile dashboards and replicate the changes of the main dashboards over to the Mobile ones. Is there any way we can "hide" specific panels for the Mobile view only?
Hello everyone, I am running into an issue that may be either Splunk or my Kiwi Syslog server; I am not really sure, and the research I am doing is not helping so far. We had a network device that was not communicating and sending logs to the syslog server, but we fixed that, and now whenever we view the raw logs on the server we can see the specific %Port_Security logs that we are trying to have reported directly to Splunk.

Whenever I run a search query (one that worked before a baseline change), I get 0 results. So I changed the way I am trying to retrieve these logs and ran sourcetype=syslog host={switch-name}. The switch pops up and contains a number of logs; however, the most important log that we want (%Port_Security) does not come back as a finding. After running this search, I figured there was maybe a problem with the sourcetype, so I ran a search targeting the live syslog data with source={log location} host={switch-name}. The system pops up again, but I did not find the port security report inside this search either, even after appending (%Port_Security) to it.

I reached out to the engineers that provided the tool to us, since they do the back-end configuration and troubleshooting, but they refuse to help.
Hi! My search takes a long time to generate its results. How can I speed it up?

```
| mpreview index=ciusss_vitals_linux_metric
| stats latest(_time) as latest1 by host
| eval recent = if(latest1 > relative_time(now(),"-5m"),1,0), realLatest = strftime(latest1,"%c")
| search recent=0
| stats values(host) as host
| mvexpand host
| map search="| ping host=$host$" maxsearches=200
```
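mpreview is intended for previewing raw metric data points and is slow when used for aggregation; mstats is the aggregating counterpart. A hedged sketch of the same "hosts seen in the last 24h but silent in the last 5 minutes" check (treat it as a sketch: exact mstats aggregation syntax varies by Splunk version, and this assumes any metric in the index counts as a heartbeat):

```
| mstats count(*) where index=ciusss_vitals_linux_metric earliest=-24h by host
| fields host
| search NOT
    [| mstats count(*) where index=ciusss_vitals_linux_metric earliest=-5m by host
     | fields host]
```

This also sidesteps scanning all raw metric points, and the map/ping fan-out can then run over a much smaller host list (or be replaced entirely if the alert only needs the silent-host list).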
Hi team, I am getting the below error for a custom command:

"Error in 'prtglivedata' command: External search command exited unexpectedly with non-zero error code 1."

Can someone help? Below are my default and local .conf files.

default/commands.conf:

```
[prtglivedata]
filename = prtglivedata.py
chunked = true
enableheader = true
outputheader = true
requires_srinfo = true
supports_getinfo = true
supports_multivalues = true
supports_rawargs = true
```

local/commands.conf:

```
[prtglivedata]
filename = prtglivedata.py
chunked = true
enableheader = true
outputheader = true
requires_srinfo = true
supports_getinfo = true
supports_multivalues = true
supports_rawargs = true
```
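For reference, with chunked = true the command speaks protocol v2, and the v1-era options (enableheader, outputheader, supports_getinfo, supports_rawargs, etc.) do not apply to it, so the stanza can be trimmed to a minimal sketch like this:

```
[prtglivedata]
filename = prtglivedata.py
chunked = true
python.version = python3
```

That said, "exited unexpectedly with non-zero error code 1" usually means the Python script itself crashed (import error, wrong interpreter, unhandled exception) rather than a commands.conf problem; the traceback is typically visible in the job's search.log (Job Inspector) or in splunkd.log.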
When I tried to install the Splunk UF on my client machine, it throws an error for the package I am using from the Splunk official website.
Hello, I'm struggling with a task and would like to ask for your opinion about it. The goal is to set up an alert which fires in case the last 24h results differ from those of 24h-48h before, and which also shows the difference. I was trying to have something like:

```
| set diff
    [search message="Connected to system:*" earliest=-24h
     | rex field=connectedSystemName message="Connected to system: (?<systemName>.+)"
     | stats values(connectedSystemName) as system_names]
    [search message="Connected to system:*" earliest=-48h latest=-24h
     | rex field=connectedSystemName message="Connected to system: (?<systemName>.+)"
     | stats values(connectedSystemName) as system_names]
```

The result of this search is one column listing the names of the connected systems. How could I achieve such a comparison and also show the differences? Thanks a lot in advance! Peter
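A hedged alternative without set diff: search the whole 48h window once, bucket each event into a period, and keep the systems that appear in only one period (the rex here extracts from the message field, which is an assumption about where the text lives):

```
message="Connected to system:*" earliest=-48h
| rex field=message "Connected to system: (?<systemName>.+)"
| eval period = if(_time >= relative_time(now(), "-24h"), "last24h", "previous24h")
| stats values(period) as periods by systemName
| where mvcount(periods) = 1
```

Each surviving row is a difference, and its periods value tells you whether the system is newly appeared ("last24h") or disappeared ("previous24h"); alert when the result count is greater than zero.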
How do I update the TLS protocols for Splunk servers and ports in a Splunk Enterprise environment?

Servers   Ports
A         8089
B         8089

Only the following protocols are to be updated:

TLSv1.3:
0x13,0x01 TLS_
0x13,0x02 TLS_
0x13,0x03 TLS_

TLSv1.2:
0xC0,0x2B ECDHE-
0xC0,0x2F ECDHE-
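Port 8089 is the splunkd management port, so the relevant settings live in server.conf under [sslConfig]. A hedged sketch (verify against your version's server.conf spec: the accepted sslVersions values depend on the OpenSSL build shipped with your Splunk release, and TLS 1.3 is not available on older ones). For the TLSv1.2 codes listed, 0xC0,0x2B and 0xC0,0x2F correspond to the OpenSSL names ECDHE-ECDSA-AES128-GCM-SHA256 and ECDHE-RSA-AES128-GCM-SHA256.

```
# server.conf on each server; restart splunkd after changes
[sslConfig]
sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256
```

The effective settings can then be checked externally, e.g. with `openssl s_client -connect <server>:8089` from another host.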
I created an enhanced timeline that works the way I want, but I'm wondering if there is a way to highlight or change the color of the block for certain events. The ones I want to highlight begin with a *, so they are easy to identify. Is there anything I can do in the search? I'm displaying the graphic on a classic dashboard; is there something I can do in the source code to get this done? Thanks in advance for any suggestions.
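If the timeline visualization supports coloring by a category field (an assumption about the specific viz app in use — check its formatting options), a flag can be derived in the search. A sketch, with event_label standing in for whatever field holds the event names; note that in SPL's like() the % is the wildcard and * is a literal character, which is convenient here:

```
| eval highlight = if(like(event_label, "*%"), "starred", "normal")
```

The viz can then map the two highlight values to distinct colors; otherwise, per-category colors would need to be set in the dashboard source for that visualization.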