Hi all, I have two scenarios:
1. We ingest logs (Windows, Linux) using the Splunk agent.
2. We ingest logs from flat files using the Splunk agent.

I've been asked to check whether the Splunk agent has any log integrity checking feature. Does the Splunk agent (or any other component in Splunk ES) check that the logs have not been tampered with in transit?

Thanks,
J
I'm trying to create a role for a developer in our organization where the developer is only allowed to view dashboards created by the admin or by a user whose role has the edit_own_objects capability. I created a developer role with the following capabilities attached:

capabilities = [
  "search",
  "list_all_objects",
  "rest_properties_get",
  "embed_report"
]

Now, when I log in as the developer and view the dashboards, they are visible and in read-only mode, but the developer can also create new dashboards, which shouldn't be allowed. How can I restrict the developer from creating a new dashboard? Also, the following capabilities get added to the role automatically, along with the ones I specified above:

run_collect
run_mcollect
schedule_rtsearch
edit_own_objects

I've also given the developer role read access in the permission settings of the specific dashboard.
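For anyone comparing notes, a minimal authorize.conf sketch of a view-only role (the stanza name is an assumption, and the capability list is just the four quoted above): the run_collect/run_mcollect/schedule_rtsearch/edit_own_objects entries usually arrive through inheritance from the built-in user role, so defining the role without importing user is one way to keep dashboard creation off the table:

# authorize.conf -- hypothetical stand-alone role that does NOT import
# the built-in "user" role (the usual source of edit_own_objects etc.)
[role_developer_readonly]
srchIndexesAllowed = *
search = enabled
list_all_objects = enabled
rest_properties_get = enabled
embed_report = enabled

Note that saving a new dashboard also requires write access to the app it would live in, so removing this role's write permission on that app (Apps > Manage Apps > Permissions) is worth checking as well.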
Relanto@DESKTOP-FRSRLVP MINGW64 ~
$ curl -k -u admin:adminadmin https://localhost:8089/servicesNS/admin/search/data/ui/panels -d "name=user_login_panel&eai:data=<panel><label>User Login Stats</label></panel>"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3990  100  3913  100    77  12955    254 --:--:-- --:--:-- --:--:-- 13255
<?xml version="1.0" encoding="UTF-8"?>
<!--This is to override browser formatting; see server.conf[httpServer] to disable.-->
<?xml-stylesheet type="text/xml" href="/static/atom.xsl"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:s="http://dev.splunk.com/ns/rest" xmlns:opensearch="http://a9.com/-/spec/opensearch/1.1/">
  <title>panels</title>
  <id>https://localhost:8089/servicesNS/admin/search/data/ui/panels</id>
  <updated>2024-12-03T12:27:38+05:30</updated>
  <generator build="0b8d769cb912" version="9.3.1"/>
  <author>
    <name>Splunk</name>
  </author>
  <link href="/servicesNS/admin/search/data/ui/panels/_new" rel="create"/>
  <link href="/servicesNS/admin/search/data/ui/panels/_reload" rel="_reload"/>
  <link href="/servicesNS/admin/search/data/ui/panels/_acl" rel="_acl"/>
  <opensearch:totalResults>1</opensearch:totalResults>
  <opensearch:itemsPerPage>30</opensearch:itemsPerPage>
  <opensearch:startIndex>0</opensearch:startIndex>
  <s:messages/>
  <entry>
    <title>user_login_panel</title>
    <id>https://localhost:8089/servicesNS/admin/search/data/ui/panels/user_login_panel</id>
    <updated>2024-12-03T12:27:38+05:30</updated>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="alternate"/>
    <author>
      <name>admin</name>
    </author>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="list"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel/_reload" rel="_reload"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="edit"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel" rel="remove"/>
    <link href="/servicesNS/admin/search/data/ui/panels/user_login_panel/move" rel="move"/>
    <content type="text/xml">
      <s:dict>
        <s:key name="disabled">0</s:key>
        <s:key name="eai:acl">
          <s:dict>
            <s:key name="app">search</s:key>
            <s:key name="can_change_perms">1</s:key>
            <s:key name="can_list">1</s:key>
            <s:key name="can_share_app">1</s:key>
            <s:key name="can_share_global">1</s:key>
            <s:key name="can_share_user">1</s:key>
            <s:key name="can_write">1</s:key>
            <s:key name="modifiable">1</s:key>
            <s:key name="owner">admin</s:key>
            <s:key name="perms"/>
            <s:key name="removable">1</s:key>
            <s:key name="sharing">user</s:key>
          </s:dict>
        </s:key>
        <s:key name="eai:appName">search</s:key>
        <s:key name="eai:data"><![CDATA[<panel><label>User Login Stats</label></panel>]]></s:key>
        <s:key name="eai:digest">6ad60f5607b5d1dd50044816b18d139b</s:key>
        <s:key name="eai:userName">admin</s:key>
        <s:key name="label">User Login Stats</s:key>
        <s:key name="panel.title">user_login_panel</s:key>
        <s:key name="rootNode">panel</s:key>
      </s:dict>
    </content>
  </entry>
</feed>

Relanto@DESKTOP-FRSRLVP MINGW64 ~
$

I have created the panel using the Splunk REST API documentation: https://docs.splunk.com/Documentation/Splunk/7.2.0/RESTREF/RESTknowledge#data.2Fui.2Fpanels. After creating the panel, it is not showing in my Splunk Enterprise UI. What is the actual use of this?
Hi, from Splunk, how can I check which logs are being forwarded out to another SIEM? outputs.conf is configured to forward syslog; what does the syslog output contain?
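As a starting point, a sketch of the two pieces that control this (group, host, and sourcetype names are placeholders): the [syslog:<group>] stanza in outputs.conf defines where syslog output goes, and unless props/transforms set _SYSLOG_ROUTING for specific data, the defaultGroup receives everything the forwarder processes:

# outputs.conf -- hypothetical syslog output group
[syslog]
defaultGroup = siem_out

[syslog:siem_out]
server = siem.example.com:514
type = udp

# props.conf -- route only one sourcetype to the SIEM instead
[my_sourcetype]
TRANSFORMS-routing = route_to_siem

# transforms.conf
[route_to_siem]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = siem_out

So "what the syslog contains" is either all processed events (defaultGroup with no routing transforms) or exactly the sourcetypes whose transforms point at the syslog group.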
I have a table that looks like this:

Day         Percent
2024-11-01  100
2024-11-02  99.6
2024-11-03  94.2
...         ...
2024-12-01  22.1
2024-12-02  19.0

From this table I am calculating three fields, REMEDIATION_50, _80, and _100, using evals like the following:

| eval REMEDIATION_50 = if(PERCENTAGE <= 50, "x", "")

From this eval statement I am going to have multiple rows where the _50 and _80 fields are marked, and some where both fields are marked. I'm interested in isolating the DAY of the first time each of these milestones is hit. I've yet to craft the right combination of stats, where, and evals that gets me what I want. In the end, I'd like to get to something like this:

Start       50%         80%         100%
2024-11-01  2024-11-23  2024-12-02  -

Any help would be appreciated, thanks!
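A sketch of one way to collapse this (assuming, from the example dates, that the 80% milestone corresponds to PERCENTAGE <= 20 and 100% to PERCENTAGE <= 0, and that Day is in sortable YYYY-MM-DD form): min(eval(...)) inside stats keeps only the days where the condition held, so the minimum is the first day each milestone was hit:

... your base search ...
| stats min(Day) as Start
        min(eval(if(PERCENTAGE <= 50, Day, null()))) as "50%"
        min(eval(if(PERCENTAGE <= 20, Day, null()))) as "80%"
        min(eval(if(PERCENTAGE <= 0, Day, null()))) as "100%"
| fillnull value="-" "50%" "80%" "100%"

The fillnull at the end puts a dash in any milestone column that has not been reached yet, matching the desired output row.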
I have created a lookup table in Splunk that contains a column with various regex patterns intended to match file paths. My goal is to use this lookup table within a search query to identify events where the path field matches any of the regex patterns specified in the Regex_Path column.

Here is the challenge I'm facing: when using the match() function in my search query, it only successfully matches if the Regex_Path pattern completely matches the path field in the event. However, I expected match() to perform partial matches based on the regex pattern, which does not seem to be the case. Interestingly, if I manually replace Regex_Path in the where match() clause with the actual regex pattern, it successfully performs the match as expected. Here is an example of my search query:

index=teleport event="sftp" path!=""
| eval path_lower=lower(path)
| lookup Sensitive_File_Path.csv Regex_Path AS path_lower OUTPUT Regex_Path, Note
| where match(path_lower, Regex_Path)
| table path_lower, Regex_Path, Note

I would like to understand why the match() function isn't working as anticipated when using the lookup table, and whether there is a better method to achieve the desired regex matching. Any insights or suggestions on how to resolve this issue would be greatly appreciated.
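For what it's worth, the usual explanation is that the lookup command does exact string matching: Regex_Path is only returned on rows where path_lower is literally equal to the pattern text, so on every other row Regex_Path is null and match() has nothing to evaluate. One workaround (a sketch; fine for small pattern lists, since it multiplies rows) is to pair every event with every pattern and then filter:

index=teleport event="sftp" path!=""
| eval path_lower=lower(path), joiner=1
| join type=inner max=0 joiner
    [| inputlookup Sensitive_File_Path.csv
     | eval joiner=1
     | fields joiner Regex_Path Note]
| where match(path_lower, Regex_Path)
| table path_lower, Regex_Path, Note

Alternatively, a lookup definition with match_type in transforms.conf supports WILDCARD and CIDR matching, but not arbitrary regex, which is why the cross-pairing approach tends to show up for cases like this one.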
Hello, dear Splunk Community. I am trying to extract the ingest volume from our client's search head, but I noticed that I am getting different results depending on which method I use. For example, if I run the following query:

index=_internal source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s)
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx)
| eval GB=round(b/1024/1024/1024, 3)
| timechart sum(GB) as Volume span=1d

I get the following table:

_time       Volume
2024-11-25  240.489
2024-11-26  727.444
2024-11-27  751.526
2024-11-28  777.469
2024-11-29  727.366
2024-11-30  724.419
2024-12-01  787.632
2024-12-02  587.710

On the other hand, when I go to Apps > CMC > License usage > Ingest and fetch the data for "last 7 days" (same as above), I get the following table:

_time       GB
2024-11-25  851.012
2024-11-26  877.134
2024-11-27  872.973
2024-11-28  949.041
2024-11-29  939.627
2024-11-30  835.154
2024-12-01  955.316
2024-12-02  963.486

As you can see, there is a considerable mismatch between the two results, so I'm at a crossroads because I don't know which one to trust. Based on previous topics, I notice the above query has been recommended before, even in posts from 2024. I don't know if this is related to my user not having the appropriate capabilities or whatnot, but any insights about this issue are greatly appreciated. Cheers, everyone.
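One thing worth checking (an educated guess about the mismatch, not a definitive diagnosis): the CMC ingest panels are generally built on the daily RolloverSummary events that the license manager writes at its local midnight, whereas type="Usage" events are per-minute records that can undercount if the search does not run against the license manager's _internal data or if squashing is in play. A comparison search along those lines:

index=_internal source=*license_usage.log* type=RolloverSummary earliest=-7d@d
| eval GB=round(b/1024/1024/1024, 3)
| timechart span=1d sum(GB) as Volume

If the RolloverSummary numbers line up with CMC, the gap is in where and how the Usage search is being run rather than in the data itself.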
Using the below sample search, I'm trying to get every possible combination of results between two different sets of data, and I'm interested in whether there are any good techniques for doing so that are relatively efficient. At least with the production data set I'm working with, it should translate to about 40,000 results. Below is just an example to make the data set easier to understand. Thank you in advance for any assistance.

Sample search:

| makeresults
| eval new_set="A,B,C"
| makemv delim="," new_set
| append
    [| makeresults
     | eval baseline="X,Y,Z"]
| makemv delim="," baseline

Output should be roughly in the format below, and I'm stuck on getting the data manipulated in a way that aligns with this:

new_set - baseline
A-X
A-Y
A-Z
B-X
B-Y
B-Z
C-X
C-Y
C-Z
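A join-free sketch that produces the full cross product (the split() literals stand in for however the real sets are generated): each mvexpand multiplies the rows by the values of one set, giving 3 x 3 = 9 rows here, and new_set x baseline rows in general:

| makeresults
| eval new_set=split("A,B,C", ",")
| mvexpand new_set
| eval baseline=split("X,Y,Z", ",")
| mvexpand baseline
| eval combo=new_set . "-" . baseline
| table new_set baseline combo

For the production case, the same shape works if the baseline values are first gathered into a multivalue field (for example with stats values() in a subsearch or eventstats) and then mvexpand-ed against the main result set; 40,000 output rows is well within what mvexpand handles by default.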
Hello Splunk Community, I was wondering if anyone has been successful in setting up the Microsoft Teams Add-on for Splunk app on their Enterprise/heavy forwarder. This application requires configuring a Teams webhook. Reading the documentation, it appears that the app is supposed to create or include the Microsoft Teams-specific webhook. However, when I attempt to search for the webhook in the Search app using:

sourcetype="m365:webhook"

I don't get anything back, and I'm not sure what the webhook address is, since the documentation doesn't specify the format or go over the steps to create a webhook address. I followed these steps: https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_the_Microsoft_Teams_Add-on_for_Splunk

If anyone has an idea on how to create the webhook, or an idea of what I am doing wrong, I would greatly appreciate it. Thanks!
Remove Blue Dot

In Dashboard Studio, my panels use a parent search which uses a multisearch. Because of this, all of the panels have this annoying informational blue dot that appears until the search completely finishes. How can I get rid of this so it never appears?
In November, the Splunk Threat Research Team had one release of new security content via the Enterprise Security Content Update (ESCU) app (v4.43.0). With this release, there are 2 new analytic stories and 9 new analytics now available in Splunk Enterprise Security via the ESCU application update process. Content highlights include:

Braodo Stealer analytics story: This includes detections to help identify the Braodo Stealer malware, which is designed to steal sensitive information like credentials, cookies, and system data. To learn more about Braodo Stealer and the detections included in this analytics story, check out the team's blog "Cracking Braodo Stealer: Analyzing Python Malware and Its Obfuscated Loader."

Enhanced drilldowns: In addition, all TTP or Anomaly and Correlation type detections have had two drilldowns added to their yaml files. The drilldowns let users view detection results for specific risk objects and access risk events from the past 7 days.

New Analytic Stories (2)
Braodo Stealer
Critical Alerts

New Analytics (9)
Detect Critical Alerts from Security Tools
High Volume of Bytes Out to Url
Internal Horizontal Port Scan NMAP Top 20
Plain HTTP POST Exfiltrated Data
Windows Archived Collected Data In TEMP Folder
Windows Credentials from Password Stores Chrome Copied in TEMP Dir
Windows Credentials from Web Browsers Saved in TEMP Folder
Windows Disable or Stop Browser Process
Windows Screen Capture in TEMP folder

The team also published the following 4 blogs:
Cracking Braodo Stealer: Analyzing Python Malware and Its Obfuscated Loader
CosmicSting: A Critical XXE Vulnerability in Adobe Commerce and Magento (CVE-2024-34102)
Bypassing the Bypass: Detecting Okta Classic Application Sign-On Policy Evasion
Splunk Security Content for Threat Detection & Response: Q3 Roundup

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
We have a lookup in Splunk, and we are looking to send a few columns from the lookup to another product via a POST API call. My question is: are there any Splunk add-ons I can leverage to do this? I see there is an HTTP alert action that can make a POST; however, with this being a lookup (CSV), I am not sure it will work correctly.
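For reference, a sketch of how the pieces could be wired together with the built-in webhook alert action (the stanza name, lookup name, columns, and URL are all placeholders): a scheduled search over the lookup with the webhook action attached:

# savedsearches.conf -- hypothetical scheduled search + webhook action
[post_lookup_to_api]
search = | inputlookup my_lookup.csv | table col_a col_b col_c
cron_schedule = 0 6 * * *
enableSched = 1
action.webhook = 1
action.webhook.param.url = https://other-product.example.com/api/ingest

One caveat to verify first: as far as I know, the stock webhook action posts a JSON payload built around the first result row plus search metadata, so if the whole lookup needs to land in one POST body, a custom alert action (or an external script calling the other product's API) may be the better fit.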
I recently migrated Splunk from v8 to v9, and I am having issues with ldapsearch not returning data that it had previously returned. I am trying to pull lastLogon for accurate tracking, but this attribute will not return anything. lastLogonTimestamp works but is too far out of sync for my reporting requirements. I have the LDAP configuration in the Active Directory add-on set to port 3269, and everything else works fine except this one attribute. I set up delegation to read lastLogonTimestamp and then everything, so it's not a permissions issue from what I can see. Any help would be appreciated.
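One likely explanation worth ruling out (stated as an assumption, not a confirmed diagnosis): lastLogon is a non-replicated, per-domain-controller attribute and is not part of the Global Catalog's partial attribute set, so a GC query on port 3269 typically will not return it, while lastLogonTimestamp (replicated, but only synced roughly every 9-14 days) does come back. Pointing the connection at a standard LDAPS port (636) on a specific DC, or querying each DC and taking the max, is the usual workaround. A quick test along those lines:

| ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,lastLogon,lastLogonTimestamp"
| table sAMAccountName lastLogon lastLogonTimestamp

(domain=default is a placeholder for the configured domain stanza.) If lastLogon appears against 636/389 but not 3269, the Global Catalog is the culprit rather than the v9 upgrade or permissions.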
Hello, I need help passing a field value from a dashboard table into a "Link to search" drilldown but can't figure it out. I have a table that contains a "host" field. I need to be able to click on any of the returned hosts and drill into all of the events for that host. I tried this drilldown query, hoping that $host$ would be replaced with the actual host name:

source="udp:514" host="$host$.doman.com"

but, of course, it failed; the token just gets replaced with "*". I'm sure I'm probably way off on how to do this, but any help would be awesome. Thanks in advance. Tom
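Assuming this is a classic Simple XML dashboard, a sketch of the piece that usually fixes it: inside a table drilldown, the clicked row's field values come through as $row.<fieldname>$ (and $click.value$ carries the first column), while a bare $host$ refers to a dashboard token that is unset, hence the fallback to *:

<drilldown>
  <!-- $row.host$ is the "host" cell of the clicked row -->
  <link target="_blank"><![CDATA[/app/search/search?q=search source="udp:514" host="$row.host$.doman.com"]]></link>
</drilldown>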
How can I identify whether the stream_events function is called at the time interval or during create/edit of a data input?
Hello everyone, I am terrible at regex. I am trying to regex a field called "alert.message" to create another field with only the contents of alert.message after "On-Prem - ". I can achieve this in regex101 with:

(?<=On-Prem - ).*

But I know in Splunk we have to give it a field name, and I can't figure out the correct syntax to add the field name so it would work. An example of one I've tried without success:

rex field="alert.message" "\?(?<Name><=On Prem - ).*"

If possible, could someone help me out with this one? Thanks for any help, Tom
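A sketch of the two usual ways to write it in SPL (field and capture names taken from the post): either keep the lookbehind and put the named group after it, or drop the lookbehind and just match the literal prefix:

| rex field="alert.message" "(?<=On-Prem - )(?<Name>.*)"

or, equivalently:

| rex field="alert.message" "On-Prem - (?<Name>.*)"

The field name goes in its own (?<Name>...) capture group; it cannot be folded into the lookbehind syntax, which is what the attempted version ran into.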
So I want to build a dashboard with the _introspection index. Some of the metrics I am looking for are THP (enabled/disabled), ulimits, CPU, memory, disk usage, swap usage, clock sync (realtime & hardware), etc. I couldn't find any solid documentation for the _introspection index as to which source and component these variables are stored under, nor what data is available in the index overall. Can someone please point me to a documented list of all the data points in the index, if any such docs exist? Also, is there a specific component/source where I can find the KPIs I mentioned above?
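Not a documented inventory, but a quick way to map what your own deployment actually writes there (component values vary by platform and version, so treat the field names below as things to verify rather than guarantees):

index=_introspection earliest=-15m
| stats count by sourcetype, component

On Linux, host-level CPU/memory/swap figures generally live under sourcetype=splunk_resource_usage component=Hostwide (fields like data.mem, data.mem_used, data.swap, data.swap_used), per-process usage under component=PerProcess, and disk/partition data under sourcetype=splunk_disk_objects. For example:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| stats latest("data.mem_used") as mem_used latest("data.swap_used") as swap_used by host

THP state, ulimits, and clock details are also reported in splunkd.log at startup, so some of those KPIs may need index=_internal rather than _introspection.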
Hi, I have a log file on the server which I ingested into Splunk through an input app, where I defined the index, sourcetype, and monitor statement in inputs.conf. The log file on the server looks like below:

xyz asdfoasdf asfanfafd
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
sdfsdfja agf[oija[gfojerg fgoaierr apodsifa[soigaiga[oiga[dogj
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
sadfnasd;fiasfdoiasndf'i dfdf fd garehaehseht shse thse tjst
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
asdf;nafdsknasdf asdfknasdfln asdf;nasdkfnasf asogja'fja foj'apogj aogj agf

When I search the log file in Splunk, the logs are visible; however, events are not breaking as I expect them to. I want events to be separated as below:

Event 1:
xyz asdfoasdf asfanfafd
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Event 2:
sdfsdfja agf[oija[gfojerg fgoaierr apodsifa[soigaiga[oiga[dogj
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Event 3:
sadfnasd;fiasfdoiasndf'i dfdf fd garehaehseht shse thse tjst
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Event 4:
asdf;nafdsknasdf asdfknasdfln asdf;nasdkfnasf asogja'fja foj'apogj aogj agf
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
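A props.conf sketch for the indexer or heavy forwarder that first parses this sourcetype (the stanza name and the minimum colon count are assumptions based on the sample, and DATETIME_CONFIG = CURRENT is only a guess because the sample shows no timestamps):

# props.conf
[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = :{20,}([\r\n]+)
DATETIME_CONFIG = CURRENT

LINE_BREAKER discards only the text captured in the group, so the run of colons stays at the end of the preceding event and the next event starts on the following line, which matches the Event 1 through Event 4 layout above.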
Hi all, I have 2 events present in a sourcetype, with different data. There is one field which has the same data in both events, but the field names are different. Can anyone suggest a method other than JOIN to combine the 2 events? I tried combining the fields with the coalesce command, but once I combined them I was not able to see the combined fields. I want to combine the events and do some calculations.
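A sketch of the usual join-free pattern (id_a and id_b are placeholders for the two differently named fields): coalesce only builds the shared key; it is the stats by that key that actually merges the two events into one row, which is likely the missing step:

index=your_index sourcetype=your_sourcetype
| eval common_id=coalesce(id_a, id_b)
| stats values(*) as * by common_id

After the stats, each common_id row carries the fields from both events side by side and is ready for further eval calculations.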
Good day Splunkers,

We have two sites/DCs, where one is production and the other a standby DR. In our current architecture, we have intermediate forwarders that forward the logs to Splunk Cloud. All universal forwarders send metrics/logs to these intermediate forwarders. We also have a single deployment server. The architecture is as follows:

UF -> IF -> SH (Splunk Cloud)

The intermediate forwarders are heavy forwarders; they do some indexing and some data transformation, such as anonymizing data. The search head is in the cloud. We have been asked to move from the current production-DR architectural setup to a multi-site (active-active) setup. The requirement is for both DCs to be active and servicing customers at the same time.

What is your recommendation in terms of setting up the forwarding layer? Is it okay to provision two more intermediate forwarders in the other DC and have all universal forwarders send to all intermediate forwarders across the two DCs? Is there a best practice you can point me towards? Furthermore, do we need more deployment servers?

Extra info: the network team is about to complete network migration to Cisco ACI.
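On the UF side specifically, a sketch of what "send to all intermediate forwarders across both DCs" looks like in outputs.conf (hostnames and ports are placeholders): the universal forwarder auto-load-balances across every server listed, so losing a DC simply shrinks the pool instead of breaking forwarding:

# outputs.conf on every universal forwarder
[tcpout]
defaultGroup = intermediate_forwarders

[tcpout:intermediate_forwarders]
server = if1.dc1.example.com:9997, if2.dc1.example.com:9997, if1.dc2.example.com:9997, if2.dc2.example.com:9997
autoLBFrequency = 30

A single deployment server can still manage clients in both DCs as long as they can reach it; adding a second one is more about DS availability than about the multi-site data path.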