All Topics

We are facing disk space issues at the HQ site: almost all of the indexers are at 95% disk utilization.

Total disk space = 10 TB
Indexes = 7.5 TB
Summary index = 1.5 TB
500 GB reserved for Splunk operations

Management has now approved an extra 3 TB for each indexer. My question: can we add the 3 TB as another partition/volume and move the less expensive indexes entirely to the new volume? I need the detailed steps.
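Not from the post; a hedged sketch of the usual approach: stop Splunk on the indexer, copy the index directories to the new volume (preserving ownership), then point the index's paths at the new location in indexes.conf and restart. The index name and paths below are examples only:

```
# indexes.conf sketch -- index name and mount point are hypothetical
[less_expensive_index]
homePath   = /new_volume/splunk/less_expensive_index/db
coldPath   = /new_volume/splunk/less_expensive_index/colddb
thawedPath = /new_volume/splunk/less_expensive_index/thaweddb
```

In a clustered environment this change would normally be pushed from the cluster master via the master-apps bundle rather than edited per indexer; verify against the indexes.conf spec for your version.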
I have been using AppDynamics for a long time, but I have one question that has never been answered. For example: if an application has 10 business transactions (URLs), and these BTs call each other (e.g. clicking UI /login hits another BT /home/page), the application map in the AppDynamics console only shows the tier/application (with all BTs inside it) connected to backends and other tiers. How can I get a real-time map of all these BTs communicating with each other? If I could achieve that, I could get the exact number of calls, errors, and response times between two BTs from the app map, without going into snapshots (which give similar data for a single call only). I can get this to some extent when the applications are microservices, where each API/BT runs as its own JVM instance. Let me know your thoughts, and reach out to me if you need more details on my query. Thanks.
I am developing a monthly report/dashboard for a client and would like to ask them a number of non-technical questions about their requirements in order to develop this report. I specifically need to ask some sizing and timing questions, for example:

How often does the data change (continuously, hourly, daily, weekly, etc.)?
How many concurrent users may need to run this report?

I would appreciate any more sizing and timing questions I could ask the client. Cheers.

Note: my client's business is all about security control of their systems' data and company information. They want a monthly report, delivered via dashboards, that shows trends in their data: for example, identifying people who have access to their data, who don't have such access, how many failed logins, etc. They want to compare the previous month's report with the current month's and see any changes in activity.
I have records with 2 fields:

phone number   result
1111           success
2222           success
2222           failed
3333           success
3333           failed
4444           failed

How do I get the phone numbers that failed ONLY? In this example I want to get "4444". If I search result=failed, I get 2222, 3333, and 4444, but I want to exclude 2222 and 3333 because they also got "success". Is there a quick way to do that? Thanks!
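Not from the post; a common hedged pattern (field names assumed to match the table above) is to collect every result value per phone number with stats, then keep only the numbers whose single value is "failed":

```
... | stats dc(result) AS result_count values(result) AS results BY phone_number
    | where result_count=1 AND results="failed"
```

With the sample data this should return only 4444, since 2222 and 3333 each have two distinct result values.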
I note the author details reference a version that does not yet exist (https://splunkbase.splunk.com/apps/#/author/mimecast_4_splunk_integration). As there are a number of known CIM compatibility and field extraction issues, we all hope Mimecast would consider making this a public code project so members of the community can contribute directly.
Hi Ninjas, I have the following sample events in Splunk:

[02/18/2020 10:47:15.1318] CAUAJM_I_40245 EVENT: CHANGE_STATUS STATUS: STARTING JOB: CFDW_ADHOC_C_AIMSAS_D_INV_LNITEM_BILLING_CHGS_M MACHINE: XXXX
[02/18/2020 10:48:15.1318] CAUAJM_I_40245 EVENT: CHANGE_STATUS STATUS: RUNNING JOB: CFDW_ADHOC_C_AIMSAS_D_INV_LNITEM_BILLING_CHGS_M MACHINE: XXXX
[02/18/2020 18:25:15.1318] CAUAJM_I_40245 EVENT: CHANGE_STATUS STATUS: SUCCESS JOB: CFDW_ADHOC_C_AIMSAS_D_INV_LNITEM_BILLING_CHGS_M MACHINE: XXXX

I need help calculating the total number of running/starting jobs for every 5 minutes: the query should track how long each job holds a status and count it in a timechart. For example, if I run a query for the total number of running jobs between 10:48 and 18:25, the job shown in the sample events should be included in the count. Your help is much appreciated.
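Not from the post; one hedged sketch (the sourcetype is an assumption) pairs each job's STARTING event with its terminal event via transaction, then counts overlapping jobs per 5-minute bucket with concurrency:

```
sourcetype=autosys_log "EVENT: CHANGE_STATUS"
| rex "JOB:\s+(?<JOB>\S+)"
| transaction JOB startswith="STATUS: STARTING" endswith="STATUS: SUCCESS"
| concurrency duration=duration
| timechart span=5m max(concurrency) AS running_jobs
```

If jobs can also end in FAILURE or TERMINATED, the endswith clause would need to match those statuses as well.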
I have something like the message below in my logs. How can I replace "This is my logfile ** ->" with an empty string, and then extract name, startdate, dept, enddate, status, and id to get their values?

This is my logfile ** -> myfulljson { name { value: "Test" } startdate { value: "2020-02-21" } dept { value: 110 } enddate { value: "20200220" } status { value: "finish" } id { value: "1234" } }
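Not from the post; since the payload is not valid JSON (so spath won't parse it directly), a hedged sketch extracts each field with rex, assuming the event text shown above:

```
... | rex "name\s*{\s*value:\s*\"(?<name>[^\"]+)\""
    | rex "startdate\s*{\s*value:\s*\"(?<startdate>[^\"]+)\""
    | rex "dept\s*{\s*value:\s*(?<dept>\d+)"
    | rex "enddate\s*{\s*value:\s*\"(?<enddate>[^\"]+)\""
    | rex "status\s*{\s*value:\s*\"(?<status>[^\"]+)\""
    | rex "id\s*{\s*value:\s*\"(?<id>[^\"]+)\""
```

The prefix itself can be dropped with something like `| eval _raw=replace(_raw, "^This is my logfile \*\* ->\s*", "")` if the clean text is needed downstream.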
Does anyone have a TA (technology add-on) for Symantec Brightmail?
I'm trying to get a blacklist entry that works on Universal Forwarders to filter out specific event codes whose user fields end in $. What I have now works in my test environment with uploaded sample logs, but not directly on the Universal Forwarder itself:

blacklist1 = EventCode="(4624|4634)" user=".*\$"
blacklist2 = EventCode="4672" Account_Name=".*\$"

What can I do to get this right so it actually works? I know that in the raw event log, the matching line is space-indented, something like:

...
Subject:
  Security ID:    S-1-5-18
  Account Name:   something$
  Account Domain: domain
...

Thank you!
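Not from the post; one hedged possibility is that on the forwarder the blacklist keys must be valid Windows event log field names, and for Security events the account name usually has to be matched inside the Message body rather than via a `user` key. A sketch under that assumption (regex and stanza would need testing against your events):

```
# inputs.conf sketch -- the (?ms) flags let the regex match the
# indented "Account Name:" line inside the multiline Message body
[WinEventLog://Security]
blacklist1 = EventCode="(4624|4634)" Message="(?ms)Account Name:\s+\S+\$"
blacklist2 = EventCode="4672" Message="(?ms)Account Name:\s+\S+\$"
```

The exact set of supported keys is listed in the inputs.conf spec for your forwarder version; verify there before deploying.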
I have 2 situations to address:

1. If there is no data in the index for the timeframe, create a blank row with "no data" and exit the query.
2. If data is found, then eval the next steps; if the result is 0, create a blank row with "0" as the data.

Can both of these be achieved in a single query? Basically: search the index for data; if no data is found, create a "no data" row and exit; else, if data is found but the eval produces no results, create a "0" row. I hope my question is clear.
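Not from the post; a common hedged pattern uses appendpipe to add a placeholder row only when the pipeline above it produced no rows (field and index names below are hypothetical):

```
index=my_index ...
| <your evals and stats>
| appendpipe [ stats count AS rows | where rows=0 | eval status="no data" | fields status ]
```

The subsearch inside appendpipe sees the current result set: if it is empty, `stats count` yields 0, the `where` keeps that row, and the "no data" placeholder is appended; otherwise the `where` discards it and the real results pass through untouched. The "0" case from situation 2 can be handled the same way with a second appendpipe testing the eval'd field.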
I want to implement Workload Management so that jobs are balanced across a multisite indexer cluster. The current infrastructure:

5 search head clusters (each search head cluster has 6 search head members)
Multisite indexer cluster (2 sites, each consisting of 75 search peers)

Can anyone suggest how jobs can be balanced within the multisite indexer cluster so that neither site is overloaded? A sample config for a workload pool configuration would be appreciated.
(Apologies in advance, since I am not even sure what question to ask or how to ask it. I'll rewrite it once I get a better idea.)

Grouping events via transaction correctly produces multivalue results for some fields. The problem is that certain standard functions, such as color formatting (e.g. making "failed" cells red) and post-transaction filtering (e.g. search status!=success), no longer work on that field. How do I remove the "started" value from the values in the status field? Or perhaps, how do I evaluate a new field such as last_status that is equal to the status value of the last event in the group? (I've looked at the related questions and Splunk docs; the solutions mostly use mvexpand and similar commands, and I couldn't figure out how to extract single values out of what appears to be an array of them.)

The search:

sourcetype="linux_messages_syslog" uuid="*" | transaction uuid | table _time duration eventcount status

P.S. Please assume transaction is a must and I cannot use stats instead of it. Thank you!
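Not from the post; a hedged sketch using mvindex, which picks a single element out of a multivalue field (index -1 is the last value), so the transaction stays as required:

```
sourcetype="linux_messages_syslog" uuid="*"
| transaction uuid
| eval last_status=mvindex(status, -1)
| table _time duration eventcount last_status
```

Because last_status is single-valued, post-transaction filters like `search last_status!=success` and cell color formatting should behave normally on it.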
Hello, I have a lookup file with the fields Month, earliest, and latest. I have a drop-down named "Month" which gives me the list of all the months from the lookup table. When I choose a month from the drop-down, the corresponding earliest and latest values should be passed to the searches or the time range token in the dashboard.

Month  earliest             latest
Jan    01/15/2020:03:34:45  01/15/2020:05:34:45
Feb    02/15/2020:03:34:45  02/15/2020:01:34:45
Mar    03/15/2020:03:34:45  03/15/2020:07:34:45
Apr    04/15/2020:03:34:45  04/15/2020:08:34:45
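Not from the post; a hedged Simple XML sketch where the drop-down is populated from the lookup and its change handler copies the selected row's earliest/latest into tokens (the lookup file name is hypothetical):

```
<input type="dropdown" token="month_tok">
  <label>Month</label>
  <fieldForLabel>Month</fieldForLabel>
  <fieldForValue>Month</fieldForValue>
  <search>
    <query>| inputlookup month_ranges.csv | fields Month earliest latest</query>
  </search>
  <change>
    <set token="tok_earliest">$row.earliest$</set>
    <set token="tok_latest">$row.latest$</set>
  </change>
</input>
```

Panel searches can then use `earliest=$tok_earliest$ latest=$tok_latest$`; the %m/%d/%Y:%H:%M:%S format shown in the lookup is one that Splunk's time modifiers accept directly.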
I have a test environment with similar devices (Windows 10 laptops that serve the same purpose and have the same software) that I'm using to test the SSL configuration for the inputs/outputs of log receiving/forwarding between Universal Forwarders and one indexer (both version 6.5). The issue: once the Universal Forwarder is installed and the device points to the indexer (which is also set up as a deployment server) to pull the app containing the test SSL configuration for outputs.conf, some devices do not hash the sslPassword when the UF restarts, while others do. All of the devices are phoning home and listed under Forwarder Management, but only some are sending logs; the rest (the ones not hashing the sslPassword) can't send any logs. What is going on?
Can someone please help with the error below? The Splunk forwarder is failing with:

● splunk.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
   Loaded: loaded (/etc/systemd/system/splunk.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2020-02-21 14:11:39 PST; 785ms ago
  Process: 30472 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n (code=exited, status=0/SUCCESS)
  Process: 30469 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n (code=exited, status=0/SUCCESS)
  Process: 30468 ExecStart=/opt/splunk/splunkforwarder/bin/splunk _internal_launch_under_systemd (code=exited, status=1/FAILURE)
 Main PID: 30468 (code=exited, status=1/FAILURE)

Feb 21 14:11:39 localhost systemd[1]: splunk.service: main process exited, code=exited, status=1/FAILURE
Feb 21 14:11:39 localhost systemd[1]: Unit splunk.service entered failed state.
Feb 21 14:11:39 localhost systemd[1]: splunk.service failed.
Feb 21 14:11:39 localhost systemd[1]: splunk.service holdoff time over, scheduling restart.
Feb 21 14:11:39 localhost systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Feb 21 14:11:39 localhost systemd[1]: start request repeated too quickly for splunk.service
Feb 21 14:11:39 localhost systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Feb 21 14:11:39 localhost systemd[1]: Unit splunk.service entered failed state.
Feb 21 14:11:39 localhost systemd[1]: splunk.service failed.
I am planning to use the Splunk App for Web Analytics to provide information from multiple web servers with multiple sites. When I examine the logs, each sitename is recorded as W3SVC1, for example. In the app, configuring the site (Setup - Websites) using W3SVC1 works properly. However, if I change the site to the URL mysite.contoso.com, I get: "Reason: Site in logs and in lookup mismatch. This might be ok if you want to override the default site name in the logs." Yes, I would like to override this so it makes more sense to end users, instead of multiple W3SVC1 entries. If I move forward with the URL, I can run "Generate user sessions", but "Generate pages" returns 0 results. Using W3SVC1, both "Generate user sessions" and "Generate pages" work properly. Is there a way to override the site in step 2 of Setup - Websites for the website configuration? I am using IIS logs, and I did read that I can make a single site log and it will display that, but ideally I would rather have each site keep its own logs. Alternatively, if someone knows of another web analytics app, I would be interested. Thank you!
Splunk isn't completely parsing the XML into fields in search results, only sections. For example, in the sample event below, the System and UserData sections are fields, but the XML elements inside them (i.e. Username and IpAddress) are not parsed into fields. Based on some of what I've read here in the forums, I've already edited my props.conf for sourcetype=XmlWinEventLog but haven't seen any change:

[source::XmlWinEventLog]
KV_MODE = xml
TRUNCATE = 0

I don't know what I'm missing and could use some help. (What I put in there, Splunk was probably already doing.) Here's a sample event; I added line breaks to make it easier to read, but in raw search results it's a single line:

<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event" xml:lang="en-US">
  <System>
    <Provider Name="Microsoft-Windows-TerminalServices-Gateway" Guid="{4D5AE6A1-C7C8-4E6D-B840-4D8080B42E1B}" />
    <EventID>200</EventID>
    <Version>0</Version>
    <Level>4</Level>
    <Task>2</Task>
    <Opcode>30</Opcode>
    <Keywords>0x4020000001000000</Keywords>
    <TimeCreated SystemTime="2020-02-21T18:54:19.913701800Z" />
    <EventRecordID>1219</EventRecordID>
    <Correlation ActivityID="{BEA11342-474B-47DE-907D-F2FBEBD40000}" />
    <Execution ProcessID="5480" ThreadID="8416" />
    <Channel>Microsoft-Windows-TerminalServices-Gateway/Operational</Channel>
    <Computer>gatewayserver.domain.com</Computer>
    <Security UserID="S-1-5-20" />
  </System>
  <UserData>
    <EventInfo xmlns="aag">
      <Username>domain\username</Username>
      <IpAddress>173.x.x.x</IpAddress>
      <AuthType>NTLM</AuthType>
      <Resource />
      <ConnectionProtocol>HTTP</ConnectionProtocol>
      <ErrorCode>0</ErrorCode>
    </EventInfo>
  </UserData>
  <RenderingInfo Culture="en-US">
    <Message>The user "domain\username", on client computer "173.x.x.x", met connection authorization policy requirements and was therefore authorized to access the RD Gateway server. The authentication method used was: "NTLM" and connection protocol used: "HTTP".</Message>
    <Level>Information</Level>
    <Task />
    <Opcode />
    <Channel />
    <Provider />
    <Keywords>
      <Keyword>Audit Success</Keyword>
    </Keywords>
  </RenderingInfo>
</Event>
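Not from the post; one hedged possibility is that the stanza header is the issue: `[source::XmlWinEventLog]` matches events whose *source* is XmlWinEventLog, whereas a sourcetype stanza is just the bare name. A sketch of the corrected props.conf:

```
# props.conf sketch -- bare stanza name targets the sourcetype,
# not a source path
[XmlWinEventLog]
KV_MODE = xml
TRUNCATE = 0
```

Since KV_MODE is a search-time setting, this would need to live on the search head (and the search re-run) to take effect.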
Helm deploy is successful. I checked that the related Splunk fluentd pods are up and running:

splunk-splunk-kubernetes-logging-v6gbg                1/1  Running  0  3m57s
splunk-splunk-kubernetes-metrics-agg-d7dc75b4c-pm6gv  1/1  Running  0  3m57s
splunk-splunk-kubernetes-metrics-dpnpw                1/1  Running  0  3m57s
splunk-splunk-kubernetes-objects-7c94bdccc-tlzzb      1/1  Running  0  3m57s

Log A:
2020-02-21 20:07:05 +0000 [info]: #0 starting fluentd worker pid=17 ppid=6 worker=0
2020-02-21 20:07:05 +0000 [info]: #0 fluentd worker is now running worker=0

Log B:
2020-02-21 20:06:55 +0000 [info]: #0 listening port port=24224 bind="0.0.0.0"
2020-02-21 20:06:55 +0000 [warn]: #0 /var/log/kube-apiserver-audit.log not found. Continuing without tailing it.
2020-02-21 20:06:55 +0000 [info]: #0 fluentd worker is now running worker=0

Log C:
2020-02-21 20:07:06 +0000 [info]: #0 starting fluentd worker pid=16 ppid=6 worker=0
2020-02-21 20:07:06 +0000 [info]: #0 fluentd worker is now running worker=0

Log D:
2020-02-21 20:06:53 +0000 [info]: #0 starting fluentd worker pid=15 ppid=6 worker=0
2020-02-21 20:06:53 +0000 [info]: #0 fluentd worker is now running worker=0

But entities cannot be discovered on the Splunk instance (on-prem), even though some data has already been forwarded:

21/02/2020 15:15:07.742 metric  host = k8s-node1  source = kube.node.tasks_stats.nr_io_wait  sourcetype = httpevent

I am currently blocked by this issue and cannot use the app. Any help is appreciated; thanks in advance.
Does Splunk have a RHEL OVA package for UEBA? Ubuntu is not supported in our environment, and the OVA is so much easier to install.