All Topics


I am trying to remove everything before the { character to preserve the JSON format. I am using SEDCMD-keepjson = s/^[^{]{/{/ in the sourcetype configuration, but it fails to apply correctly. However, when I use the search command | rex mode=sed "s/^[^{]{/{/", it successfully removes the unwanted text. I am wondering what could be causing this issue. The sourcetype settings are configured on both the Search Head (SH) and the Heavy Forwarder (HF). A sample event:

Mar 28 13:11:57 abcdeabcdev01w.abcdabcd.local {<json_log>}
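For reference, a minimal props.conf sketch of the stanza being described, keeping in mind that SEDCMD rewrites events at parse time (so it has to be in effect on the HF or indexer that first parses the data; a copy on the SH does not change indexed events) and that [^{] without a quantifier matches exactly one character. The sourcetype name and the + quantifier below are assumptions, not a confirmed fix:

[your_json_sourcetype]
# strip every leading character up to, but not including, the first "{"
SEDCMD-keepjson = s/^[^{]+//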
I would like to extract an IP address from a text field where the IP address has a trailing port number. The text is like so:

X-Upstream:"11.111.11.11:81"

The extraction would provide only the IP address. Thanks.
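A hedged SPL sketch of one way to do this with rex against the raw event, assuming the literal X-Upstream:"..." format shown above (adjust the pattern, or use field=, if the value is already extracted into its own field):

| rex "X-Upstream:\"(?<ip>\d{1,3}(?:\.\d{1,3}){3}):\d+\""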
Hello Splunk Community, I'm seeking help regarding an issue I'm facing. The main problem is that vulnerability detection data is not showing up in my Splunk dashboard. Wazuh is installed and running correctly, and other data appears to be coming through, but the vulnerability detection events are missing. I've verified that:

- Wazuh services are running properly without critical errors.
- Vulnerability Detector is enabled in the Wazuh configuration (ossec.conf).
- Wazuh agents are reporting other types of events successfully.

Despite this, no vulnerability data appears in the dashboard. Could someone guide me on how to troubleshoot this? Any advice on checking Wazuh modules, Splunk sourcetypes, indexes, or forwarder configurations would be highly appreciated. Thank you in advance for your support!
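As a first triage step, a hedged SPL sketch to confirm whether any vulnerability-detector events are reaching Splunk at all; the index and sourcetype patterns here are assumptions, so substitute whatever your Wazuh integration actually writes to:

index=wazuh* sourcetype=wazuh* "vulnerability"
| stats count by index, sourcetype, source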
Hi everyone, I have 3 indexers (in a cluster) located on Site A. The current replication factor (RF) is set to 3. I need to move operations to Site B. However, for technical reasons, I cannot physically move the existing data — I will simply have 3 new indexers at Site B. Here's the plan I'm considering:

1. Launch 1 new indexer at Site B.
2. Add the new indexer to the existing cluster.
3. Increase the RF to 4 (so that all raw data is fully replicated across the available indexers).
4. Shut down all 3 indexers at Site A.
5. Decrease the RF back to 3. (I understand there is a risk of some data loss.)
6. Add 2 additional new indexers to the cluster at Site B.

My main concern is step 5 — decreasing the RF — which I know is not best practice, but given my situation, I don't have many options. Has anyone encountered a similar situation before? I'd appreciate any advice, lessons learned, or other options I might not be aware of. Thanks in advance!
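For reference, a hedged sketch of the CLI steps usually involved in a plan like this (verify against the documentation for your version before relying on it; hostnames and timing are placeholders):

# on the cluster manager: raise (and later lower) the replication factor; a manager restart is typically required
splunk edit cluster-config -replication_factor 4
splunk restart

# on each Site A peer, once Site B holds a full copy: decommission gracefully so buckets are rebalanced first
splunk offline --enforce-counts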
We have a heavy forwarder that accepts logs over HEC.

inputs.conf

[http://dd-log-token1]
index= ddlogs1
token = XXXXX XXX XXX XXX

[http://dd-log-token2]
index= ddlogs2
token = XXXXX XXX XXX XXX

[http://dd-log-token3]
index= ddlogs3
token = XXXXX XXX XXX XXX

I want to forward only the input below to 2 different Splunk instances:
1. Splunk Cloud (hosted by Splunk)
2. Splunk on-prem

[http://dd-log-token2]
index= ddlogs2
token = XXXXX XXX XXX XXX

This is what my inputs.conf looks like:

[http://dd-log-token1]
index= ddlogs1
token = XXXXX XXX XXX XXX

[http://dd-log-token2]
index= ddlogs2
token = XXXXX XXX XXX XXX
outputgroup = splunkonprem, splunkcloud

[http://dd-log-token3]
index= ddlogs3
token = XXXXX XXX XXX XXX

outputs.conf

[tcpout]
defaultgroup = splunkonprem,splunkcloud
forceTimebasedAutoLB = true

[tcpout: splunkonprem]
server= zyx.com:9997, abc.com:9997

[tcpout: splunkonprem]
server= mmm.com:9997, bbb.com:9997

But these settings are only sending logs to the on-prem indexers, not to the Splunk Cloud indexers. Please suggest any ideas about what's wrong with my configuration.
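For comparison, a hedged sketch of how a two-group outputs.conf is usually laid out (the server and group names are placeholders copied from the post); note the two distinct group names, the camel-case defaultGroup, and no space after "tcpout:":

[tcpout]
defaultGroup = splunkonprem, splunkcloud
forceTimebasedAutoLB = true

[tcpout:splunkonprem]
server = zyx.com:9997, abc.com:9997

[tcpout:splunkcloud]
server = mmm.com:9997, bbb.com:9997

As a hedged side note, Splunk Cloud destinations are normally configured from the cloud stack's forwarder credentials package (certificates plus the cloud endpoints), so a plain server list may not be sufficient on its own.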
This just makes things confusing - why do the RPM and DEB versions (both x86 and ARM) and the Windows builds of v9.3.3 have build hash `75595d8f83ef`, but when you look at the Solaris UFs, the build hash is `740e48416363`?! What gives? This just makes our lives more difficult when trying to organize large-scale downloads for users in a heterogeneous environment...
Splunk gives validation warnings that the unknown node submit is not allowed here. Are there any fixes for this?

<form version="1.1" theme="dark">
  <!-- Fieldset for dropdown input -->
  <fieldset submitButton="true" autoRun="true">
    <input type="dropdown" token="A_or_B" searchWhenChanged="false">
      <label>Select A or B</label>
      <default>A</default>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
    </input>
  </fieldset>
  <!-- Submit block, should be placed directly inside form -->
  <submit>
    <condition match="$A_or_B$ == &quot;A&quot;">
      <set token="tokenX">1</set>
      <set token="tokenY">2</set>
    </condition>
    <condition match="$A_or_B$ == &quot;B&quot;">
      <set token="tokenX">3</set>
      <set token="tokenY">4</set>
    </condition>
  </submit>
</form>

Microsoft 365 App for Splunk
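A hedged sketch of the more conventional Simple XML pattern for conditionally setting tokens, using a <change> handler inside the input instead of a top-level <submit> node (token names are copied from the post; this shows an alternative structure, not a confirmed fix for the validator warning):

<input type="dropdown" token="A_or_B" searchWhenChanged="false">
  <label>Select A or B</label>
  <default>A</default>
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <change>
    <condition value="A">
      <set token="tokenX">1</set>
      <set token="tokenY">2</set>
    </condition>
    <condition value="B">
      <set token="tokenX">3</set>
      <set token="tokenY">4</set>
    </condition>
  </change>
</input>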
Short question: can I configure my Windows UF inputs.conf to collect Security event logs with renderXml=false, unless it is EventCode=4662; if EventCode=4662 then I want renderXml=true.

inputs.conf file

[WinEventLog://Security]
disabled = 0
index = wineventlog
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
renderXml=false
#(if EventCode=4662 then set renderXml=true)

I read that maybe a transforms.conf would help with this...?

The explanation for this configuration request is that I want to utilize this search for DCSync attacks provided by Splunk Enterprise Security, which only seems to work with XML-ingested Security event 4662:

ESCU - Windows AD Replication Request Initiated by User Account - Rule

`wineventlog_security` EventCode=4662 ObjectType IN ("%{19195a5b-6da0-11d0-afd3-00c04fd930c9}","domainDNS") AND Properties IN ("*Replicating Directory Changes All*", "*{1131f6ad-9c07-11d1-f79f-00c04fc2dcd2}*","*{9923a32a-3607-11d2-b9be-0000f87a36b2}*","*{1131f6ac-9c07-11d1-f79f-00c04fc2dcd2}*") AND AccessMask="0x100" AND NOT (SubjectUserSid="NT AUT*" OR SubjectUserSid="S-1-5-18" OR SubjectDomainName="Window Manager" OR SubjectUserName="*$")
| stats min(_time) as _time, count by SubjectDomainName, SubjectUserName, Computer, Logon_ID, ObjectName, ObjectServer, ObjectType, OperationType, status dest
| rename SubjectDomainName as Target_Domain, SubjectUserName as user, Logon_ID as TargetLogonId, _time as attack_time
| appendpipe [| map search="search `wineventlog_security` EventCode=4624 TargetLogonId=$TargetLogonId$" | fields - status]
| table attack_time, AuthenticationPackageName, LogonProcessName, LogonType, TargetUserSid, Target_Domain, user, Computer, TargetLogonId, status, src_ip, src_category, ObjectName, ObjectServer, ObjectType, OperationType, dest
| stats min(attack_time) as _time values(TargetUserSid) as TargetUserSid, values(Target_Domain) as Target_Domain, values(user) as user, values(Computer) as Computer, values(status) as status, values(src_category) as src_category, values(src_ip) as src_ip by TargetLogonId dest
We've run into two odd issues when testing a Shibboleth implementation, but I'm not sure if they are related. AQRs are set up so users are not cached. We noticed that a user without content only shows up on the search head the load balancer has sent them to, and therefore content cannot be assigned to them unless they've accessed all the search heads.

Along the same content line, "replicate certificates" does not do what it says: it does not replicate the IdP cert across the search heads, but as soon as it was enabled, content, users, and SAML groups did replicate peer-to-peer. I assume we have an incorrect setting in place, but any help is very much appreciated!
Good day. I've browsed the official documentation and the forum for some time, and I haven't found exactly the answer I need, so this is my question (it applies to HF and Enterprise). I would like to limit the internet access of my HF. Over the months, two possible connections come to mind:

- Updating Splunk
- Updating plugins from Splunkbase

After some research, I haven't found which IP addresses or URLs are the right ones to configure in the firewall. Any help?
Hi Splunkers, I recently noticed an issue while opening dashboards—both default and custom app dashboards—in Splunk. I'm encountering a console error that seems to be related to a JavaScript file named layout.js. The error:

Failed to load resource: the server responded with a status of 404 (Not Found)
:8000/en-US/splunkd/__raw/servicesNS/sanjai/search/static/appLogo.png:1

The screenshots above are taken from the browser's developer console—specifically the Network tab. I'm unsure why this is occurring, as I haven't made any recent changes to the static resources. Has anyone else run into a similar issue? Would appreciate any insights!
Hi Splunkers! I'm currently working on a project where the goal is to visualize various KPIs in Splunk based on Jira ticketing data. In our setup, Jira is managed separately and contains around 10 different projects. I'm considering multiple integration methods:

- Using the Jira Issue Input Add-on
- Using the Splunk for JIRA Add-on

Before diving in, I'd love to hear from your experiences:

- Which method do you recommend for efficient and reliable integration?
- Any specific limitations or gotchas I should be aware of?
- Is the Jira Issue Input Add-on scalable enough for multiple Jira projects?

Thanks in advance for your insights!
I've been asked to assist another department with getting their Splunk configuration working with Windows UFs. They have a single Linux-based 9.4.1 indexer that is successfully fed by a large number of Linux UFs. For the most part I haven't found anything really odd about it. They are using self-signed certs that have several years of validity left on them. FTR, I am not a Windows admin, so I am kind of grasping at straws here.

Both their 'nix and Windows UFs use Splunk's Deployment Server for configuration. All UFs are using the same fwd_to_loghost and ssl_bundle apps; the only difference is the windowsconf_global or linux_global app, as appropriate (I have verified the correct app is installed). They made an attempt a year or so ago to get this working, with no success. I believe I've removed all trace of it and have removed and reinstalled the UF (using 9.4.1 this time) on the Windows host from scratch.

The Windows box connects to the Deployment Server and downloads the apps (fwd_to_loghost, ssl_bundle, and windowsconf_global) correctly, but when it tries to connect to the indexer to send logs it fails. The indexer says:

ERROR TcpInputProc [2957596 FwdDataReceiverThread-0] - Error encountered for connection from src=[redacted, correct IP address]:49902. error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol

The Windows box has some interesting things to say in C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log:

04-24-2025 14:03:59.924 -0700 INFO TcpOutputProc [2948 parsing] - Initializing connection for non-ssl forwarding to loghost.biostat.washington.edu:9997
...
04-24-2025 14:03:59.940 -0700 INFO CertificateData [2948 parsing] - channel=Forwarder, subject="emailAddress=[redacted],CN=loghost-uf.biostat.washington.edu,OU=Biostatistics,O=University of Washington,L=Seattle,ST=Washington,C=US", subjectAltName="DNS:keller-uf, DNS:keller-uf.biostat.washington.edu, DNS:loghost-uf, DNS:loghost-uf.biostat.washington.edu", serial=10, notValidBefore=1623099653, notValidAfter=1938459653, issuer="/C=US/ST=Washington/L=Seattle/O=UW/OU=Biostatistics/CN=zwickel.biostat.washington.edu/emailAddress=bite@uw.edu", sha256-fingerprint=10:31:07:BF:21:F2:49:41:34:E4:53:7F:89:C0:CB:81:99:6E:16:00:29:3E:C4:BC:C3:88:A1:CC:92:D0:AD:32
...
04-24-2025 14:04:00.362 -0700 WARN X509Verify [5944 HTTPDispatch] - X509 certificate (O=SplunkUser,CN=SplunkServerDefaultCert) should not be used, as it is issued by Splunk's own default Certificate Authority (CA). This puts your Splunk instance at very high-risk of the MITM attack. Either commercial-CA-signed or self-CA-signed certificates must be used; see: <http://docs.splunk.com/Documentation/Splunk/latest/Security/Howtoself-signcertificates>
04-24-2025 14:04:00.381 -0700 INFO CertificateData [5944 HTTPDispatch] - channel=HTTPServer, subject="O=SplunkUser,CN=SplunkServerDefaultCert", subjectAltName="", serial=9814D004673F8828, notValidBefore=1745011134, notValidAfter=1839619134, issuer="/C=US/ST=CA/L=San Francisco/O=Splunk/CN=SplunkCommonCA/emailAddress=support@splunk.com", sha256-fingerprint=DC:75:CA:ED:54:2A:28:12:D4:A1:B9:DC:37:29:75:F4:9B:56:1F:A2:C7:33:BB:EB:EF:02:37:AC:6E:81:E4:CA

I am not seeing anything in the log before the non-ssl line that appears to be an error, though it is a noisy log so it is quite possible I missed something. I have my working Splunk configuration with functional Windows and Linux UFs that I am trying to base this work on. It does not have the non-ssl or SplunkServerDefaultCert log entries.
I presume both are Bad Signs<tm>. Both my working system and this one have sslRootCAPath set in deployment-apps/fwd_to_loghost/default/outputs.conf:

[tcpout]
defaultGroup = splunkssl

[tcpout:splunkssl]
compressed = true
server = loghost.biostat.washington.edu:9997
clientCert = $SPLUNK_HOME/etc/apps/ssl_bundle/default/UF/loghost-uf-bundle.crt
sslPassword = [redacted]
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_bundle/default/biostat-ca.crt
sslVerifyServerCert = true

Neither of them [had] sslRootCAPath set anywhere else in deployment-apps. I've tried adding a deployment-apps/windowsconf_global/default/server.conf, though ConfigureSplunkforwardingtousesignedcertificates seems to say this is only needed for non-Windows hosts:

[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/apps/ssl_bundle/default/biostat-ca.crt

but the "unknown protocol" errors and the non-ssl and SplunkServerDefaultCert log entries persist. As I said, I'm not a Windows admin, but given that the Windows hosts in the working environment are fine with paths like "$SPLUNK_HOME/etc/apps/ssl_bundle/default/..." in outputs.conf, and there is a reference to a clearly self-signed cert in the log, I have to presume these path entries are valid and working, so it should be finding both the cert and the bundle.

I've looked at the output of btool server and btool outputs, comparing it with the working instance, and I don't see any obvious or glaring problems. The new server.conf entry shows up in the output of btool server list, so it is being seen but not having any impact on the problem. I presume the "unknown protocol" error appears because the Windows UF is trying to use a non-SSL connection, per the UF's log file entry.

I've read (and re-read, and re-re-read) https://docs.splunk.com/Documentation/Splunk/9.4.1/Security/ConfigureSplunkforwardingtousesignedcertificates and several forum posts that seem to be about this kind of problem, but so far nothing seems to have addressed it. I have to try not to break the Linux UFs that are working, so I have to be careful what files I touch in deployment-apps - I'm trying to limit myself to only modifying things in windowsconf_global when possible.

Where should I look to try to resolve this problem? Given that the Linux UFs are working fine, I presume the problem is somewhere in the config for the Windows UF. Thanks in advance for any assistance.
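For what it's worth, a hedged sketch of the btool invocations that usually make this kind of layering question visible; --debug prefixes each effective setting with the file it came from, so you can see whether the Windows UF is actually applying the fwd_to_loghost outputs.conf (the paths below assume a default Windows UF install):

REM run from the UF's bin directory on the Windows host
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk.exe btool outputs list tcpout --debug
splunk.exe btool server list sslConfig --debug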
There is extra contextual data for the Malware Detection events that is needed in order to properly start an investigation into the alerts. Some of the additional contextual elements needed are in the “C:\ProgramData\Veeam\Backup\Malware_Detection_Logs\” directory. When setting up a SOC to respond to these alerts, you need this information embedded directly into the detection alert from your SIEM. This covers only the “Malware_Detection_Logs” directory; there are other contextual data sources needed that will be covered in another post.

First, ensure baseline logging and dashboards are set up. Veeam has actually done a very good job of making this easy:
- set up syslog for VBR: Specifying Syslog Servers
- install the Splunk app in your Splunk environment: Veeam App for Splunk

You then need to install a Splunk Universal Forwarder on the VBR server to collect the additional context data: Download Universal Forwarder for Remote Data Collection | Splunk
- create a custom Splunk file monitor input to collect data from the "Malware_Detection_Logs" directory (work with your Splunk admin if you need help; a sketch follows below)

Now you are ready to correlate the Malware Detection events with their contextual data in a custom detection alert for your SOC:
- only the “ObjectName” field is present in both the Malware Detection event and the log files
- you need to use the timestamp and the ObjectName to correlate these data sources together
- THERE IS A LOT OF NUANCE IN THE <snip>ed SECTION BELOW (I can help if you need it, but it was too much to post and needed to be sanitized anyway)

You will now see contextual data embedded directly into the SOC alert (we were doing some testing and set .txt and .pdf as bad file extensions just to generate data).
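A minimal sketch of the kind of file monitor input described above, for the UF on the VBR server; the index and sourcetype names are assumptions, so use whatever your Splunk admin has provisioned:

# inputs.conf on the VBR server's Universal Forwarder
[monitor://C:\ProgramData\Veeam\Backup\Malware_Detection_Logs]
disabled = 0
index = veeam
sourcetype = veeam:malware_detection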
I want to replace the hard-coded text "Today" with the current system date in a Splunk report. Please help if it is possible. Please see the attachment.
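A hedged SPL fragment for the general idea, assuming the "Today" text lives in a field produced by the report (the field name label here is an assumption):

| eval label=replace(label, "Today", strftime(now(), "%Y-%m-%d"))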
We have one index, os_linux, which has 2 sourcetypes, and I see that props and transforms are written for them. Can you help me understand how this is working?

linux:audit
Linux_os_syslog

props.conf

[Linux_os_syslog]
TIME_PREFIX = ^
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 15
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 2048
TZ = US/Eastern

transforms.conf

[linux_audit]
DEST_KEY = MetaData:Sourcetype
REGEX = type=\S+\s+msg=audit
FORMAT = sourcetype::linux:audit

[auditd_node]
REGEX = \snode=(\S+)
FORMAT = host::$1
DEST_KEY = MetaData:Host
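For these transforms to fire at index time, they would normally be referenced from the props.conf stanza with a TRANSFORMS- line; a hedged sketch of what that wiring usually looks like (the class name is arbitrary, and this assumes such a line exists somewhere in your props.conf even though it is not shown above):

[Linux_os_syslog]
# run both transforms at parse time: events matching "type=... msg=audit" get their
# sourcetype rewritten to linux:audit, and the host is rewritten from the node= value
TRANSFORMS-set_sourcetype_and_host = linux_audit, auditd_node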
Good afternoon Splunk Team, I have my search query:

index=example_mine host=x.x.x.x [ | inputlookup myfiile.csv | return 10000 $myfile] logins="successfully logged in"

The search was over the last 7 days. I have received results for everyone who successfully logged in. I need to put the results in a nice table format where X=each user and Y=time. Any help would be appreciated. v/r CMAz
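A hedged continuation of that search, assuming each event carries a user field (rename it to whatever field actually identifies the user in your data):

index=example_mine host=x.x.x.x [ | inputlookup myfiile.csv | return 10000 $myfile] logins="successfully logged in"
| eval login_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats values(login_time) as login_times by user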
We have a Splunk app that includes multiple scripted inputs. The app is deployed to 15 heavy forwarders, but we want one of the scripts to run on only one of them. I first tried adding host = <hostname> inside the scripted‑input stanza, but I now realize that this isn't the solution. Is there a way to restrict a scripted input so it executes on only a single server, without having to split the app?
Hi all, I have a situation. Below is my search. The search needs to produce a report for the past 6 months. The goal is to produce ZEROs for the months with no events. However, the search below is producing results with ZEROs for the whole year instead of just 6 months. How can I make it do this for only 6 months? Thank you!

Search:

index=sample_index sourcetype=sample_sourcetype AcknowledgedServiceAccount="No" System="ABC"
| eval ScanMonth_Translate=case( ScanMonth="1","January", ScanMonth="2","February", ScanMonth="3","March", ScanMonth="4","April", ScanMonth="5","May", ScanMonth="6","June", ScanMonth="7","July", ScanMonth="8","August", ScanMonth="9","September", ScanMonth="10","October", ScanMonth="11","November", ScanMonth="12","December")
| fields ID, System, GSS, RemediationAssignment, Environment, SeverityCode, ScanYear, ScanMonth
| fillnull value="NULL" ID, System, GSS, RemediationAssignment, Environment, SeverityCode, ScanYear, ScanMonth
| foreach System Group Environment ScanMonth, ScanYear, SeverityCode [| eval <<FIELD>> = split(<<FIELD>>, "\n") | eval <<FIELD>> = split(<<FIELD>>, "\n") | eval <<FIELD>> = split(<<FIELD>>, "\n") | eval <<FIELD>> = split(<<FIELD>>, "\n") | eval <<FIELD>> = split(<<FIELD>>, "\n") | eval <<FIELD>> = split(<<FIELD>>, "\n") ]
| stats count AS Total_Vulnerabilities BY ScanMonth, ScanYear, System, Group, Environment, SeverityCode
| fields System, Group, ScanMonth, ScanYear, Environment, SeverityCode, Total_Vulnerabilities
| stats values(eval(if(SeverityCode="1 CRITICAL",Total_Vulnerabilities, null()))) as "4_CRITICAL" values(eval(if(SeverityCode="2 HIGH",Total_Vulnerabilities, null()))) as "3_HIGH" values(eval(if(SeverityCode="3 MEDIUM",Total_Vulnerabilities, null()))) AS "2_MEDIUM" values(eval(if(SeverityCode="4 LOW",Total_Vulnerabilities, null()))) as "1_LOW" sum(Total_Vulnerabilities) AS TOTAL by System, Group, ScanMonth, ScanYear, Environment
| fillnull value="0" 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW
| fields System, Group, Environment, ScanMonth, ScanYear, 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW, TOTAL
| replace "*PROD*" WITH "1_PROD" IN Environment
| replace "*DR*" WITH "2_DR" IN Environment
| replace "*TEST*" WITH "3_TEST" IN Environment
| replace "*DEV*" WITH "4_DEV" IN Environment
| sort 0 + System, GSS, Environment, ScanMonth, ScanYear
| append [| makeresults | eval ScanMonth="1,2,3,4,5,6,7,8,9,10,11,12" | eval 4_CRITICAL="0" | eval 3_HIGH="0" | eval 2_MEDIUM="0" | eval 1_LOW="0" | eval TOTAL="0" | makemv delim="," ScanMonth | stats count by ScanMonth, 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW, TOTAL | fields - count ]
| fillnull value="0" 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW, TOTAL
| filldown
| stats sum(TOTAL) AS TOTAL sum(1_LOW) AS 1_LOW sum(2_MEDIUM) AS 2_MEDIUM sum(3_HIGH) AS 3_HIGH sum(4_CRITICAL) AS 4_CRITICAL by System, Group, ScanMonth, ScanYear, Environment
| sort 0 + System, Group, Environment, ScanMonth, ScanYear

Output:

System Group ScanMonth ScanYear Environment TOTAL 1_LOW 2_MEDIUM 3_HIGH 4_CRITICAL
A1234 GSS-27 2 2025 3_TEST 216 2 28 155 31
A1234 GSS-27 3 2025 3_TEST 430 4 56 308 62
A1234 GSS-27 1 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 2 2025 4_DEV 222 2 28 161 31
A1234 GSS-27 3 2025 4_DEV 444 4 56 322 62
A1234 GSS-27 4 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 5 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 6 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 7 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 8 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 9 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 10 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 11 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 12 2025 4_DEV 0 0 0 0 0

Desired Output:

System Group ScanMonth ScanYear Environment TOTAL 1_LOW 2_MEDIUM 3_HIGH 4_CRITICAL
A1234 GSS-27 1 2025 3_TEST 0 0 0 0 0
A1234 GSS-27 2 2025 3_TEST 221 3 4 214 0
A1234 GSS-27 3 2025 3_TEST 430 4 56 308 62
A1234 GSS-27 10 2024 3_TEST 0 0 0 0 0
A1234 GSS-27 11 2024 3_TEST 0 0 0 0 0
A1234 GSS-27 12 2024 3_TEST 5 1 2 0 2
A1234 GSS-27 1 2025 4_DEV 10 5 2 2 1
A1234 GSS-27 2 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 3 2025 4_DEV 0 0 0 0 0
A1234 GSS-27 10 2024 4_DEV 12 4 3 2 3
A1234 GSS-27 11 2024 4_DEV 20 10 5 2 3
A1234 GSS-27 12 2024 4_DEV 0 0 0 0 0
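A hedged sketch of one way to generate only the last 6 month/year combinations instead of the hard-coded 1-12 list used in the append subsearch (the ScanMonth/ScanYear string formats are assumed to match the values used elsewhere in the search):

| makeresults count=6
| streamstats count as offset
| eval _time=relative_time(now(), "-" . tostring(offset - 1) . "mon@mon")
| eval ScanMonth=tostring(tonumber(strftime(_time, "%m"))), ScanYear=strftime(_time, "%Y")
| eval 4_CRITICAL="0", 3_HIGH="0", 2_MEDIUM="0", 1_LOW="0", TOTAL="0"
| fields ScanMonth, ScanYear, 4_CRITICAL, 3_HIGH, 2_MEDIUM, 1_LOW, TOTAL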
Can Splunk read a CSV file located on a remote server using a forwarder and automatically upload it as a lookup? What I know is that there are two options: upload the CSV as a lookup, or read it line by line from the file as a log.
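A hedged sketch of the usual two-step pattern (index the file with a forwarder, then rebuild the lookup from the indexed events on a schedule); the path, index, sourcetype, and lookup names are all assumptions:

# inputs.conf on the forwarder running on the remote server
[monitor:///opt/data/mylist.csv]
index = staging
sourcetype = csv

Then a scheduled search on the search head, for example run hourly, turns the indexed rows back into a lookup:

index=staging sourcetype=csv source="/opt/data/mylist.csv"
| table *
| outputlookup mylist_lookup.csv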