Hey there! I've set up Splunk Enterprise using the AWS AMI. Now I'm attempting to install the Splunk Essentials app, but I'm running into some issues. First, when I tried to upload the .tgz file, it got blocked. Then I attempted to install it through the marketplace, but my correct username and password from splunk.com aren't working. I'm not sure how to fix this. Any help would be appreciated. Thanks!
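If the web upload is blocked, one fallback is installing from the command line on the instance itself. A minimal sketch, assuming you can copy the package to the host; the path and admin credentials are placeholders (note this uses the instance's local admin account, not your splunk.com account):

# Copy the .tgz to the instance first, then:
$SPLUNK_HOME/bin/splunk install app /tmp/splunk-essentials.tgz -auth admin:yourpassword
$SPLUNK_HOME/bin/splunk restart

For the marketplace path, the instance also needs outbound internet access to splunkbase.splunk.com, which AMI security groups sometimes block; that can make correct splunk.com credentials appear to fail.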
Hi All,

We are trying to install Splunk through a Chef script, but the installation gets stuck and times out after 20 minutes. The command we ran is given below:

/opt/splunkforwarder/bin/splunk enable boot-start --accept-license --no-prompt --answer-yes

When the Splunk installation script runs on the instance, it always hangs the first time, as in the first screenshot. It then works if the command is run again on a subsequent run, as shown in the second screenshot. Note: after the first run, the CPU went to 100% and the Splunk process then exited.

First run: (screenshot)

Second run: (screenshot)
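One pattern worth trying (a sketch, not a confirmed fix for this hang): accept the license with a plain start first, so the separate enable boot-start step has nothing left to prompt for or initialize on first launch:

# Run as root (boot-start needs to write the init/systemd unit)
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
/opt/splunkforwarder/bin/splunk stop
/opt/splunkforwarder/bin/splunk enable boot-start

Splitting the steps this way also makes it easier to see from Chef's output which command is actually hanging.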
In the current project, we are sending application logs to Splunk, while the splunk-otel-collector is responsible for sending instrumentation logs to SignalFx. The issue arises because we utilize the cloudFrontID as a correlation ID to filter logs in Splunk, whereas SignalFx employs the traceId for log tracing. I am currently facing challenges in correlating the application logs' correlation ID with SignalFx's traceId. I attempted to address this issue by using the "Serilog.Enrichers.Span" NuGet package to log the TraceId and SpanId. However, no values were logged in Splunk. How can I access the TraceId generated by the OpenTelemetry Collector within the ASP.NET web application (Framework version: 4.7.2)? Let me know if further details are required from my end.
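The OpenTelemetry .NET instrumentation builds on System.Diagnostics.Activity, so inside the request the current trace id should be readable from Activity.Current. A sketch, assuming the auto-instrumentation has created an Activity for the request and the W3C id format is in use (if Activity.Current is null where Serilog enriches, that would also explain the empty TraceId/SpanId values):

using System.Diagnostics;

public static class TraceContext
{
    // Activity.Current is the ambient span created by the OpenTelemetry
    // instrumentation; its TraceId is the W3C trace id the collector exports.
    public static string CurrentTraceId =>
        Activity.Current?.TraceId.ToString() ?? string.Empty;

    public static string CurrentSpanId =>
        Activity.Current?.SpanId.ToString() ?? string.Empty;
}

You could then log it explicitly, e.g. Log.Information("Handled request TraceId={TraceId} CloudFrontId={CloudFrontId}", TraceContext.CurrentTraceId, cloudFrontId), and filter on the same value in both Splunk and SignalFx.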
Hello All,

Logs are not being indexed into Splunk. My configurations are below.

inputs.conf:

[monitor:///usr/logs/Client*.log*]
index = admin
crcSalt = <SOURCE>
disabled = false
recursive = false

props.conf:

[source::(...(usr/logs/Client*.log*))]
sourcetype = auth_log

My log file name pattern:

Client_11.186.145.54:1_q1234567.log
Client_11.186.145.54:1_q1234567.log.~~
Client_12.187.146.53:2_s1234567.log
Client_12.187.146.53:2_s1234567.log.~~
Client_1.1.1.1:2_p1244567.log
Client_1.1.1.1:2_p1244567.log.~~

Some of the log files start with the line below, followed by the log events:

===== JLSLog: Maximum log file size is 5000000

For these files I tried the following configs one by one, but nothing worked:

1. Adding crcSalt = <SOURCE> to the monitor stanza.

2. Adding a SEDCMD in props.conf:

SEDCMD-removeheadersfooters = s/\=\=\=\=\=\sJLSLog:\s((Maximum\slog\sfile\ssize\sis\s\d+)|Initial\slog\slevel\sis\sLow)//g

3. A regex in transforms.conf:

transforms.conf:

[ignore_lines_starting_with_equals]
REGEX = ^===(.*)
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)===
TRANSFORMS-null = ignore_lines_starting_with_equals

When I check the splunkd logs, no error is captured, and list inputstatus shows:

percent = 100.00
type = finished reading / open file

Please help me out with this issue if anyone has faced and fixed it before. The weird scenario is that sometimes only the first line of the log file is indexed:

===== JLSLog: Maximum log file size is 5000000

Host/server details: OS Solaris 10, Splunk universal forwarder version 7.3.9, Splunk Enterprise version 9.1.1. The restriction here is that the host OS can't be upgraded for now, so I have to stay on universal forwarder version 7.3.9.
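One thing to check: SEDCMD and index-time TRANSFORMS run where parsing happens, i.e. on the indexer (or a heavy forwarder), not on a universal forwarder, so these stanzas only take effect if they are deployed on the Splunk Enterprise 9.1.1 side. A minimal sketch of the null-queue variant placed there (the stanza name ignore_jls_header is mine):

# props.conf (on the indexer / heavy forwarder, not the UF)
[auth_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRANSFORMS-ignorejls = ignore_jls_header

# transforms.conf (same instance)
[ignore_jls_header]
REGEX = ^=====\sJLSLog:
DEST_KEY = queue
FORMAT = nullQueue

Note the LINE_BREAKER here keeps the default newline breaking; the original ([\r\n]+)=== variant consumes the === characters as part of the break, which can leave events misaligned.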
How can I ingest Zeek logs and analyze the data?
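A minimal monitor-input sketch; the path assumes a default Zeek layout and the index name is a placeholder you would create first. The generic _json sourcetype assumes Zeek is writing JSON (e.g. via the json-streaming-logs package); the Splunk add-ons for Zeek/Corelight on Splunkbase define richer sourcetypes and dashboards for analysis:

# inputs.conf on the forwarder running next to Zeek
[monitor:///opt/zeek/logs/current/*.log]
index = zeek
sourcetype = _json
disabled = false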
Hello, I have a problem installing a Python module on Splunk. I get a "pip not found" error whenever I try to use pip. I am not sure what is wrong here. Could someone please help me figure this out?
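Splunk's bundled Python interpreter does not ship with pip, which is why invocations like "$SPLUNK_HOME/bin/splunk cmd python3 -m pip ..." report pip not found. A common workaround (a sketch; the app name my_app and the package requests are placeholders) is to install the package with the OS Python into a directory inside your app, then put that directory on sys.path:

# Install into the app's lib directory using the system Python's pip
python3 -m pip install --target "$SPLUNK_HOME/etc/apps/my_app/lib" requests

# Then, at the top of the app's Python script:
#   import os, sys
#   sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
#   import requests

Keeping the library inside the app also means it travels with the app if you package it for Splunk Cloud.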
Hi All,

I am looking into using some proxy logs to determine download volume for particular streaming sites, and I'm looking for a way to merge hostnames into one "service". Consider the SPL:

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| stats sum(megabytes) as Download_MB by url
| sort -Download_MB

It will likely return multiple rows like:

cdn1.streaming-site.com 180.3
cdn2.streaming-site.com 164.8
www.streaming-site.com  12.3

I want to merge those all into one row:

streaming-site.com 357.4

I have played around with the coalesce function, but this would be unsustainable for sites like Netflix, which have dozens of URLs associated with them. If anyone has any suggestions on how I might combine results with, say, a wildcard (*), I'd love to hear from you!
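One approach is to derive a service field from the last two labels of the hostname instead of coalescing each name by hand. A sketch, assuming url holds just the hostname (two-label public suffixes like co.uk would need extra handling):

index=proxy url=*.streaming-site.com
| eval megabytes=round(((bytes_in/1024)/1024),2)
| rex field=url "(?<service>[^.]+\.[^.]+)$"
| stats sum(megabytes) as Download_MB by service
| sort -Download_MB

For a curated many-to-one mapping (e.g. all Netflix CDN domains to "netflix"), a lookup table with WILDCARD(url) match_type configured in transforms.conf is the more maintainable route.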
Are you ready for an adventure in learning? Brace yourselves because Splunk University is back, and it's going to be a thrilling, hands-on extravaganza! Mark your calendars for June 9-11, 2024, as we dive deep into the world of Splunk in fabulous Las Vegas! Splunk University isn't just another run-of-the-mill educational event. It's a chance for our valued customer-users to supercharge their Splunk expertise with immersive experiences, interactive workshops, and networking opportunities that will leave you feeling empowered and inspired.

What to Expect

Prepare to embark on a journey filled with learning opportunities tailored to all skill levels. Whether you're a beginner, intermediate, or advanced user, we've got you covered with a diverse range of sessions designed to meet your needs. Here's a glimpse of what awaits you:

- Hands-on Labs and Bootcamps: Dive deep into the Splunk ecosystem with our extensive lineup of bootcamps, covering everything from Power User Essentials to Advanced Enterprise Administration and beyond.
- Interactive Workshops: Engage with industry experts and fellow enthusiasts through interactive workshops, breakouts, and demos designed to enhance your understanding of Splunk's capabilities.
- Networking: Connect with like-minded professionals, share insights, and forge valuable connections that will extend beyond the event.

Certification Opportunities

Elevate your Splunk game by validating your skills with Splunk Certification. At .conf24, you'll have the opportunity to take any Splunk certification exam with PearsonVUE on-site for only $25, a steal considering the usual $130 value. Don't miss this chance to showcase your expertise and stand out in the crowd! And stay tuned for news about an upcoming inaugural certification. (shhhhh…)

Early Bird Pricing and Registration

Act fast to take advantage of our Early Bird pricing, which ends on Tuesday, April 2. Trust us; you don't want to miss out on the savings and the opportunity to secure your spot at Splunk University! All the details, pricing, and packages are here.

Join Us at .conf24

And that's not all! By attending Splunk University, you'll also set yourself up for an unforgettable experience at .conf24. Expand your knowledge, enhance your skills, and make lasting memories with fellow Splunk enthusiasts. With Splunk University just around the corner, there's no better time to invest in your professional development and take your Splunk expertise to new heights. We can't wait to see you in Las Vegas for an unforgettable learning experience! Learn more about the full .conf24 experience here.

Happy Learning!
– Callie Skokos on behalf of the Splunk Education Crew
We've been collecting data with the inputs add-on (Input Add On for SentinelOne App For Splunk) for several years now. The applications channel has always been a bit problematic, with the collection process running for several days, but now we haven't seen any data since Monday, February 19th, around 5:00 PM. It's February 22nd, and we generally see applications data every day.

We started seeing errors on February 16th:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="223" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

And we have seen a few errors since then:

error_message="cannot unpack non-iterable NoneType object" error_type="<class 'TypeError'>" error_arguments="cannot unpack non-iterable NoneType object" error_filename="s1_client.py" error_line_number="500" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

error_message="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_type="<class 'management.mgmtsdk_v2.exceptions.InternalServerErrorException'>" error_arguments="[{'code': 5000010, 'detail': 'Server could not process the request.', 'title': 'Internal server error'}]" error_filename="s1_client.py" error_line_number="188" input_guid="8bb303-be5-6fe3-1b6-63a0c52b60c" input_name="Applications"

After noting the following in the release notes:

Improvements ... -- Applications input uses a new S1 API endpoint to reduce load on ingest.

we upgraded the add-on from version 5.19 to version 5.20. Now we're seeing the following messages in the sentinelone-modularinput.log:

2024-02-22 13:40:02,171 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="630" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=saving_checkpoint msg='not saving checkpoint in case there was a communication error' start=1708026001000 items_found=0 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="599" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=calling_applications_channel status=start start=1708026001000 start_length=13 start_type=<class 'str'> end=1708630801000 end_length=13 end_type=<class 'str'> checkpoint=1708026001.525169 channel=applications
2024-02-22 13:40:01,526 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="580" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications last_execution=1708026001.525169
2024-02-22 13:40:01,525 log_level=WARNING pid=41568 tid=MainThread file="sentinelone.py" function="get_channel" line_number="565" version="IA-sentinelone_app_for_splunk.5.2.0b87" action=got_checkpoint checkpoint={'last_execution': 1708026001.525169} channel=applications type=<class 'dict'>

It appears that the input is running, but we're not seeing any events. We also noted the following in the documentation for version 5.2.0:

sourcetype: sentinelone:channel:applications | SentinelOne API: web/api/v2.1/installed-applications | Description: Deprecated

Does this mean that the input has been deprecated? If so, what does the statement "Applications input uses a new S1 API endpoint to reduce load on ingest." in the release notes mean? And why is the Applications channel still an option when creating inputs through the Splunk UI? Any information you can provide on the applications channel would be greatly appreciated.
I have logs like this:

2024-02-22 12:49:38:344 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Marshall University-Unimarket TxnType=Invoice TotalAmount=-1916.83 Status=Success
2024-02-22 11:51:12:992 EST| INFO |InterfaceName=USCUSTOMERINV INVCanonicalProcess Sender_ID=ThermoFisher Scientific Receiver_ID =Mammoth Bio via Coupa TxnType=Invoice TotalAmount=4190.67 Status=Success

The query below gives the monthly total:

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" Status=success OR STATUS=Success OR Status=Failure USCUSTOMERINV
| timechart sum(TotalAmount) span=1mon

But I need the invoice total for each Receiver_ID over a 1-month span. How do I do that?
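A sketch using timechart's by clause, which yields one column per Receiver_ID. The rex is there because the raw events have a space before the equals sign (Receiver_ID =), which may keep automatic field extraction from picking the field up cleanly; drop it if Receiver_ID already extracts:

index="webmethods_qa" source="/apps/webmethods/integrationserver/instances/default/logs/USCustomerEDI.log" USCUSTOMERINV (Status=Success OR Status=Failure)
| rex "Receiver_ID\s*=\s*(?<Receiver_ID>.+?)\s+TxnType="
| timechart span=1mon sum(TotalAmount) by Receiver_ID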
Heavy forwarder or indexer crashes with a FATAL error on the typing thread.

Note: the issue is now fixed for the upcoming 9.2.2/9.1.5/9.0.10 patches.

Crashing thread: typing_0
Backtrace (PIC build):
[0x00007F192F4C2ACF] gsignal + 271 (libc.so.6 + 0x4EACF)
[0x00007F192F495EA5] abort + 295 (libc.so.6 + 0x21EA5)
[0x000055E24388D6C0] ? (splunkd + 0x1A366C0)
[0x000055E24388D770] ? (splunkd + 0x1A36770)
[0x000055E2445D6D24] PipelineInputChannelReference::PipelineInputChannelReference(Str const**, PipelineInputChannelSet*, bool) + 388 (splunkd + 0x277FD24)
[0x000055E2445BACC3] PipelineData::set_channel(Str const*, Str const*, Str const*) + 243 (splunkd + 0x2763CC3)
[0x000055E2445BAF9E] PipelineData::recomputeConfKey(PipelineSet*, bool) + 286 (splunkd + 0x2763F9E)
[0x000055E243E3689E] RegexExtractionProcessor::each(CowPipelineData&, PipelineDataVector*, bool) + 718 (splunkd + 0x1FDF89E)
[0x000055E243E36BF3] RegexExtractionProcessor::executeMulti(PipelineDataVector&, PipelineDataVector*) + 67 (splunkd + 0x1FDFBF3)
[0x000055E243BCD5F2] Pipeline::main() + 1074 (splunkd + 0x1D765F2)
[0x000055E244C336FD] Thread::_callMainAndDiscardTerminateException() + 13 (splunkd + 0x2DDC6FD)
[0x000055E244C345F2] Thread::callMain(void*) + 178 (splunkd + 0x2DDD5F2)
[0x00007F192FF1F1CA] ? (libpthread.so.0 + 0x81CA)
[0x00007F192F4ADE73] clone + 67 (libc.so.6 + 0x39E73)

Last few lines of stderr (may contain info on assertion failure, but also could be old):
Fatal thread error: pthread_mutex_lock: Invalid argument; 117 threads active, in typing_0

This crash happens if a persistent queue is enabled. It has been reported for several years; I see one reported back in 2015 as well: https://community.splunk.com/t5/Monitoring-Splunk/What-would-cause-a-Fatal-thread-error-in-thread-typing-found-in/m-p/261407

The bug has always existed, but the interesting part is that since 9.x the frequency of crashes has gone up. More customers are reporting the crashes now; the probability of hitting the race condition has increased. We are fixing the issue (internal ticket SPL-251434) for the next patch. In the meantime, here are a few workarounds to consider, depending on what is feasible for your requirement.
The reason for the higher frequency of crashes in 9.x on instances with a persistent queue enabled is that the forwarders (UF/HF/IUF/IHF) send data at a faster rate due to 9.x autoBatch; the small in-memory part of the persistent queue (default 500KB) thus makes it nearly impossible to keep the on-disk part of the persistent queue out of play. A 9.x receiver with a persistent queue is now writing to disk nearly all the time, even when the downstream pipeline queues are not saturated. So the best way to bring the crash frequency back down to the level of 8.x or older is to increase the in-memory part of the persistent queue (so that, when no downstream queues are full, there are no disk writes to the persistent queue). The fundamental bug still remains, however, and will be fixed in a patch. The workarounds reduce the possibility of disk writes for the persistent queue. Have a look at the four possible workarounds and see which one works for you (a config sketch for workaround 4 follows the list).

1. Turn off the persistent queue on the splunktcp input port (surely not feasible for all). This eliminates the crash.

2. Disable the `splunk_internal_metrics` app, as it does sourcetype cloning for metrics.log. Most of us are probably not aware that metrics.log is cloned and additionally indexed into the `_metrics` index. If you are not using the `_metrics` index, disable the app. For the crash to happen, you need two conditions: a) a persistent queue, and b) sourcetype cloning.

3. Apply the following configs to reduce the chances of crashes.

limits.conf:

[input_channels]
max_inactive=300001
lowater_inactive=300000
inactive_eligibility_age_seconds=120

inputs.conf (increase the in-memory queue size of the PQ, on the ssl or non-ssl port as appropriate):

[splunktcp-ssl:<port>]
queueSize=100MB

[splunktcp:<port>]
queueSize=100MB

Also enable async forwarding on the HF/IUF/IHF (the crashing instance).

4. Slow down forwarders by setting `autoBatch=false` on all universal forwarders/heavy forwarders.
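For workaround 4, autoBatch is set in outputs.conf on each forwarder. A minimal sketch, where the group name primary_indexers and the server list are placeholders for your own output group:

# outputs.conf on each UF/HF
[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997
# Fall back to pre-9.x per-event sending behavior to slow the forwarding rate
autoBatch = false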
My input module relies on API data and I have decided to move connection timeout configuration options over to global configuration e.g:   helper.get_global_setting("read_timeout")   Rather than requiring it be set individually per Input module. However it appears there is no `validate_input` functionality similar to the input module for global configuration options. There is no documentation for this, but you would think that being able to validate global config options inherited by every input module would be an important thing to do. I now have to figure out how to do this in each input module, but it delays telling the user they have made bad config where they enter it. I cannot rely on something like log info due to splunk cloud giving not much access to logs, so I'm more or less reliant on resetting the value or keeping them in input modules. Is there anyway this can be achieved?
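Lacking a validate_input hook for global settings, one fallback is a small shared check called at the top of each input's collect_events. A sketch, assuming the Add-on Builder helper object and that the global field is named read_timeout:

def validate_global_settings(helper):
    """Fail fast if the global read_timeout setting is unusable."""
    raw = helper.get_global_setting("read_timeout")
    try:
        timeout = float(raw)
        if timeout <= 0:
            raise ValueError
    except (TypeError, ValueError):
        helper.log_error("Invalid global read_timeout: %r (must be a positive number)" % raw)
        raise ValueError("Invalid global setting: read_timeout")
    return timeout

def collect_events(helper, ew):
    read_timeout = validate_global_settings(helper)
    # ... proceed with API calls using read_timeout ...

It does not surface the error at entry time the way validate_input would, but it keeps the validation logic in one place instead of duplicated per input.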
Hello, we have been running Website Monitoring for a while, and just recently it started continuously reporting connection timeout errors, on and off, for the URLs we track. We checked the network, and no issues could be found. Is it possible that Splunk or the Website Monitoring add-on is corrupt? Any suggestions?
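One quick way to separate an add-on problem from a real connectivity problem is to time the same URLs from the Splunk host itself, outside the add-on. A sketch (substitute one of the monitored URLs):

curl -o /dev/null -sS -w 'dns=%{time_namelookup}s connect=%{time_connect}s total=%{time_total}s\n' https://example.com/

If these times are consistently fast while the add-on still reports timeouts, the add-on (or its timeout setting) is the more likely culprit.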
I can see the business transactions (0 of 10) happening, but I can't see the transactions or the application flow map. What could be the reason for this?
I am trying to blacklist an event that is generating a lot of logs. I previously asked this question here: Solved: Re: Splunk Blacklist DesktopExtension.exe addition... - Splunk Community, but the solution is not working. Any other thoughts on how to blacklist DesktopExtension.exe for Windows Security events? My current attempt:

blacklist = EventCode=4673 message="DesktopExtension\.exe
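For the classic WinEventLog input, each key=regex pair must use a documented field name (Message, capitalized) with the regex in quotes, and all pairs in one line are ANDed. A sketch, assuming the Security input stanza; the entry is numbered blacklist1 so it does not collide with any existing blacklist line:

# inputs.conf on the forwarder (restart the forwarder after editing)
[WinEventLog://Security]
blacklist1 = EventCode="4673" Message="(?i)DesktopExtension\.exe"

The original attempt has a lowercase "message", an unclosed quote, and an unquoted EventCode value, any of which would keep the filter from matching.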
The Splunk Dashboard Examples App for SimpleXML will reach end of support on Dec 19, 2024, after which no new versions will be released and the app will be archived from Splunkbase. Check out this Splunk Lantern article to learn more.
Hello, how do I grant dashboard access to a limited set of users (identified by email address) through the backend?
I am trying to configure the distributed monitoring console without the UI (for automation purposes). It seems that I have gotten most things right: all instances show up the way I want them to; however, they are all marked as "unreachable". It seems that I must do the step where I provide credentials for the MC host to log in to the monitored host, but I cannot figure out what this step actually does. Also, I cannot find anything that hints at the credentials being stored anywhere. So what does this login process actually do, and how can I mimic that behaviour for the MC from the command line / via config files? Any insight into how the setup process works behind the scenes would be appreciated.
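In a standard distributed-search setup, that credential step adds the remote instance as a search peer: the one-time login is used to push the MC's public key (under $SPLUNK_HOME/etc/auth/distServerKeys) to the peer, after which connections authenticate with the key and the credentials are not stored. A CLI sketch of the equivalent (hosts and passwords are placeholders):

# Run on the monitoring console host, once per monitored instance
$SPLUNK_HOME/bin/splunk add search-server https://monitored-host:8089 \
    -auth admin:mc_admin_password \
    -remoteUsername admin -remotePassword peer_admin_password

The resulting peer entries land in distsearch.conf, so in an automated build you can script the add search-server call and then lay down the MC asset configuration.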
I have an application that I am trying to monitor. There is a specific event code for when the tool is opened for modification (EventCode=250), and an event code for when it is closed (EventCode=100). These two codes display a user name, but the events between them do not. How can I write a search to look for these two events and then display the changes between them, along with the username who completed the change?

| from datamodel:P3
| search EventCode=250 OR 100 OR 70 OR 80
| eval user = coalesce(User, Active_User)
| eval Event_Time=strftime(_time,"%m/%d/%y %I:%M:%S %P")
| table Event_Time, host, user, Device_Added, Device_SN, Device_ID, EventCode, EventDescription

Event_Time            host      user   Device_Added  Device_SN  Device_ID  EventCode
02/22/24 08:49:44 am  Test-Com  xxxxx                                      100
02/21/24 03:59:12 pm  Test-Com  xxxxx                                      250
02/21/24 03:56:08 pm  Test-Com  xxxxx                                      100
02/21/24 03:56:00 pm  Test-Com         USB 1         12345      PID_1      70
02/21/24 03:56:00 pm  Test-Com         USB 2         6789       PID_2      70
02/21/24 03:51:10 pm  Test-Com         USB 1         12345      PID_1      80
02/21/24 03:50:44 pm  Test-Com  xxxxx                                      250
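One hedged sketch uses transaction to group everything between an open (250) and the following close (100) on the same host, then takes the username from the resulting multivalue field. This assumes open/close pairs do not interleave on a host, and field names follow the query above:

| from datamodel:P3
| search EventCode=250 OR EventCode=100 OR EventCode=70 OR EventCode=80
| eval user=coalesce(User, Active_User)
| transaction host startswith="EventCode=250" endswith="EventCode=100"
| eval user=mvindex(user, 0)
| eval Event_Time=strftime(_time, "%m/%d/%y %I:%M:%S %P")
| table Event_Time, host, user, Device_Added, Device_SN, Device_ID, EventCode, EventDescription

After transaction, the Device_* fields become multivalue lists of every change captured between the two bracketing events, so each row shows the session's changes alongside the user who made them.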
Hi, I am looking to grab all Windows events of successful NTLM logins that did not use Kerberos. Here is my query so far:

eventcode=4776 "Error Code: 0x0" ntlm

I think this is working as of now; however, it brings back results that include the value Kerberos. I tried adding NOT "Kerberos", but it completely broke my search results. I am looking to grab only the values of "Account Name:" and "Source Network Address:" and then export them to a CSV file every week. Is this something I can do with Splunk? If so, any help would be appreciated. Thanks.
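Two hedged notes. First, EventCode 4776 events carry "Source Workstation" rather than "Source Network Address"; the source address appears on EventCode 4624 logons where the authentication package is NTLM. Second, a bare NOT Kerberos (outside quotes) is the usual exclusion syntax. A sketch, assuming the Splunk Add-on for Windows field extractions (index name and field names like Account_Name, Authentication_Package, Source_Network_Address are those extractions, so adjust to your data):

index=wineventlog EventCode=4624 Authentication_Package=NTLM
| table _time, Account_Name, Source_Network_Address

Saved as a weekly scheduled report, the results can be exported or emailed as CSV, or written out with | outputcsv ntlm_logins.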