splunk-8.1.0 server, RHEL 8 system. Following the instructions from https://docs.splunk.com/Documentation/Splunk/6.6.2/Forwarding/Routeandfilterdatad ("Route inputs to specific indexers based on the data input"), I configured the following:

vi /opt/splunk/etc/apps/search/local/inputs.conf

[monitor:///var/log/audit/audit.log]
disabled = false
index = abcd
sourcetype = linux_audit
host = smnloghost
_TCP_ROUTING = monitoring_audit

vi /opt/splunk/etc/system/local/outputs.conf

[tcpout:monitoring_audit]
server = <IP>:<PORT>
type = tcp
disabled = 0

/opt/splunk/bin/splunk cmd btool outputs list tcpout shows:

[tcpout:monitoring_audit]
disabled = 0
server = 214.16.207.174:6514
type = tcp

Restarted Splunk. There is no network connection to <IP>, just rsyslog forwarding syslog data:

netstat -natp | grep <IP>
tcp 0 0 <IP>:<PORT> <IP>:<PORT> ESTABLISHED 123313/rsyslogd

How can I forward just the data input from the audit log?
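For reference, a sketch of the usual shape of this setup — the defaultGroup name below is a placeholder for your existing indexer output group, and if the receiver is not a Splunk instance you generally also need sendCookedData = false (or a [syslog] output group) so plain text is sent:

```ini
# outputs.conf (sketch; group names are placeholders)
[tcpout]
# keeps all other data on the normal forwarding path
defaultGroup = primary_indexers

[tcpout:monitoring_audit]
server = <IP>:<PORT>
# send raw text when the receiver is not a Splunk instance
sendCookedData = false

# inputs.conf: route only this monitor stanza to that group
[monitor:///var/log/audit/audit.log]
_TCP_ROUTING = monitoring_audit
```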
Hi everyone, I have trouble with an app that we made in the Splunk Add-on Builder: the app no longer logs since we updated to Splunk 8. The app contains a custom alert action that logs some informational messages to the internal index using the helper.log_info function in Python. Do you have any idea about this issue? Where is the AOB's helper library located? Any suggestions? Thanks.
Hi, I have some data in my Splunk indexer (historical data) and I want to anonymize it now. Is there a better way to scrub it instead of deleting the data? I'm using SE v7.1.7.
hello, is there any way to define a map/object, i.e. { '123': 'something', '1234': 'anotherThing' }, and then replace strings containing '123' with 'something' and strings containing '1234' with 'anotherThing'?
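For a small fixed map like this, one common SPL pattern is an eval case() (a lookup table scales better for large maps); the field name code below is a placeholder, and the literals are taken from the question:

```spl
... | eval label=case(code=="123", "something",
                      code=="1234", "anotherThing",
                      true(), code)
```

If the keys are substrings rather than whole field values, eval replace() can be chained instead.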
I've seen lots of script examples, but not an actual step-by-step process for using SCCM to install Universal Forwarders on Windows systems. I'm a beginner with Splunk and SCCM. Looking to see if anything like that exists before I reinvent the wheel. Thanks
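The UF MSI supports silent installation, so an SCCM application deployment typically just wraps a command line like the following sketch (the file name and the host:port values are placeholders; the property names are documented Splunk MSI flags):

```shell
msiexec.exe /i splunkforwarder-8.1.0-x64-release.msi ^
  AGREETOLICENSE=Yes ^
  DEPLOYMENT_SERVER="deploy.example.com:8089" ^
  RECEIVING_INDEXER="idx.example.com:9997" ^
  /quiet
```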
Hi guys, I have an input made from the Splunk Add-on for AWS, and I want to stop ingesting a field value. This is Cloudflare data, and the field value I want to stop ingesting is WAFAction=unknown. Could I do this through props and transforms? If so, how should I do it? Thanks in advance. Regards,
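The standard pattern for this is to route matching events to nullQueue at parsing time, on the indexer or heavy forwarder that handles the input. A sketch — the sourcetype stanza name is a placeholder for your Cloudflare sourcetype:

```ini
# props.conf
[<your_cloudflare_sourcetype>]
TRANSFORMS-drop_waf = drop_waf_unknown

# transforms.conf
[drop_waf_unknown]
REGEX = WAFAction=unknown
DEST_KEY = queue
FORMAT = nullQueue
```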
I'm trying to do the following search based on my index 'transactions' and a field named 'customers', for a custom time range:

- Top 10 highest historical peak rates averaged over the following intervals (1 sec, 10 sec, 60 sec, 5 min)
- Top 10 highest daily transaction counts
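A sketch of the per-interval peak rate (untested; swap span=1s for 10s, 60s, or 5m for the other intervals, and span=1d with a plain count for the daily totals):

```spl
index=transactions
| bin _time span=1s
| stats count AS tx_per_interval BY _time, customers
| stats max(tx_per_interval) AS peak BY customers
| sort - peak
| head 10
```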
Hi, I currently have 5-6 indexes set up. Consider abc as a field name extracted at index time; the same field is called by different names in different indexes, like cde and efg. Now I want to create a common field name for all these different index-time-extracted field names. What would be the best way to achieve this while still taking advantage of the index-time extracted fields? I checked the CIM model but it does not fit my data, so even if I create a data model including all my indexes and then use calculated fields to produce one common name, it will not take any advantage of the index-time extracted fields.
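One way to get a single name without touching the index-time extractions is FIELDALIAS in props.conf — it is applied at search time but just aliases the already-indexed fields (the stanza names below are placeholders; abc and cde are from the question). One caveat: tstats and field::value searches still need the original indexed names:

```ini
# props.conf on the search head
[sourcetype_for_index_one]
FIELDALIAS-common = abc AS common_name

[sourcetype_for_index_two]
FIELDALIAS-common = cde AS common_name
```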
Hi, installing Enterprise 8.1.0 on Ubuntu 20.04, when unpacking I get the following message: cp: cannot stat '/opt/splunk/etc/regid.2001-12.com.splunk-Splunk-Enterprise.swidtag': No such file or directory. Whereas the file is directly under /opt/splunk. Duncan
So I have an application on CentOS that monitors process creation and sends it to a remote syslog server which is also running the Universal Forwarder. The syslog is then forwarded to Splunk, and I split it using the pipe "|" delimiter. However, I found that many command lines invoking tools like awk and sed use "|" to filter, which also messes up the field separation in Splunk. Does anyone know how to work around this? This is when I'm in search. Thanks
Hi guys, I'm using a Universal Forwarder in a remote location (different network) to try to get data into my Splunk instance. Both machines are Windows. The necessary firewall rules are in place, and the Universal Forwarder logs show that the UF is connected to my server. However, when I look at the logs on my Splunk server, I can see the 800706BA error 'The RPC server is unavailable'. How might I solve this problem? Both networks are independent of one another, so I'm not sure if this is an authentication issue or not. Any tips on what the next steps might be would be greatly appreciated. Many thanks!
Is it possible to run a search that will only include the events for that day after a certain time? (Using the time range picker to select the date only, so the time will be selected in the search query.) For example, I want the search to pick events after 8am for the day selected by the time range picker.
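Yes — let the picker bound the day and filter the hour inside the query. A sketch (index name is a placeholder):

```spl
index=your_index
| where tonumber(strftime(_time, "%H")) >= 8
```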
Hi, I have a query like this: index=.... hostname=*  | eval field1=if(x="y",1,0) | eval field2=if(x="z",1,0) | stats sum(field1) as "field1" sum(field2) as "field2" by hostname The result is a column chart in which I have 2 columns for each hostname that represent the counts of field1 and field2. If I click on a bar (for example field1 for a hostname), I want to open another custom dashboard that shows other details like IP, .... But in this second dashboard I don't have only the results for field1, but also field2. I know I need a token to filter the results, but I don't know how. Can anyone help me? Thanks in advance
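A sketch of the usual pattern: pass the clicked category and series as form tokens in the second dashboard's URL (the dashboard path and token names below are made up). In a column chart, $click.value$ is the x-axis category (the hostname) and $click.name2$ is the series name (field1 or field2):

```xml
<drilldown>
  <link target="_blank">
    /app/search/detail_dashboard?form.host_tok=$click.value$&amp;form.field_tok=$click.name2$
  </link>
</drilldown>
```

The second dashboard then references $host_tok$ and $field_tok$ in its base search to filter to the clicked bar.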
Hello All, I am new to Splunk and ran into my first wall when attempting to omit search results using tags. Any help on the following would be greatly appreciated. Use Case: We want to be able to hide/omit/delete certain events when needed. (i.e. there is an infrastructure issue that results in a days worth of events being erroneous and we do not want to include this data for historical reporting, alerting, etc. as it will throw off certain calcs like the 90 day average) Constraints: Our Splunk admin said that they will not "delete" events except under certain circumstances (so we cannot use "|delete" as an option) Tentative Solution: We plan to adopt the use of a tag "snooze" that we will use to suppress such "bad data" in all our searches/reports/alerts/etc. Issue: The following works IF the tag "snooze" exists on at least one event ('Splunk search'  tag!=snooze) - this correctly returns all events that do not have a "snooze" tag on any fields. However, if NO events exist with the "snooze" tag then the search returns no results (when it should instead be returning ALL events). Is there a way to change the evaluation so that all events are returned when a given tag doesn't currently exist? The likely scenario is that we apply the tag on a given record and that record may age out which would cause all our reports to break. Additionally, if there is a better approach to what we are trying to achieve I am all ears. Thank you, Splunk-a-tron
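This matches how != behaves: field!=value only matches events where the field exists, while NOT field=value also keeps events that have no such field at all, which is the behavior wanted here:

```spl
<your base search> NOT tag=snooze
```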
Dear Community,   I have the following situation: Outbound= Client -> Proxy -> IDS -> Internet Inbound (Return Traffic) = Internet -> IDS -> Proxy -> Client   I'm getting logs from both the Proxy and the IDS.   In essence, I would like to be able to correlate information from the Proxy and the IDS so I have correct IP information about the connection (Since the proxy changes the IP header, I lose visibility on the IDS and get a lot of false positives)   For Outbound, I would like to get the source IP from the Proxy and the destination from the IDS (Outbound = SrcPROXY - DstIDS) into one single row.   For Inbound, I would need to get the source from the IDS and the destination from the Proxy (Inbound = SrcIDS – DstPROXY) into one single row.   Ideally, I would like to create a transaction that displays the flow (outbound-inbound) into one single row for better visibility and incident response.  https://ibb.co/fM6V5D2   Example of logs: IDS log example Nov 17 07:11:30 10.x.x.x 2020-11-17T12:11:56Z 01 %IDS-6-******, SrcIP: 192.168.10.10, DstIP: 208.x.x.x, SrcPort: 23116, DstPort: 443, Protocol: tcp, IngressInterface: inside, EgressInterface: outside, IngressZone: inside, EgressZone: outside,  Client: SSL client, ApplicationProtocol: HTTPS, ConnectionDuration: 0, InitiatorPackets: 0, ResponderPackets: 0, InitiatorBytes: 2721, ResponderBytes: 4693, URL: https://**     Destination_IDS = DstIP: 208.x.x.x     host = 10.10.10.10     source = splunk-ids     sourcetype = ids_Syslog   Proxy Logs   11/17/20 8:16:01.000 AM Nov 17 08:16:01 10.x.x.x Nov 17 13:16:29 proxy_hostname Splunk_S: Info: 1605618979.572 170133 CLIENT_IP_ADDRESS TCP_MISS/200 7462 CONNECT tunnel://safebrowsing.googleapis.com:443/ "(Unauthenticated)CLIENT_IP_ADDRESS" DIRECT/safebrowsing.googleapis.com - OTHER-NONE-PassiveAuth_or_NTLM-NONE-NONE-NONE-DefaultGroup-NONE Source_proxy = "(Unauthenticated)10.x.x.x" host = 10.x.x.x source = splunk-proxy sourcetype = tools:proxy:squid I would highly appreciate any 
guidance, help, suggestions, or comments. Thanks! Aka
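A rough sketch of one correlation approach, assuming the two sources share some join key such as the client's ephemeral source port within a tight time window — the proxy_client_port field below is hypothetical and would need its own extraction from the squid-style log:

```spl
(sourcetype=ids_Syslog OR sourcetype="tools:proxy:squid")
| eval join_key=coalesce(SrcPort, proxy_client_port)
| stats earliest(_time) AS start,
        values(Source_proxy) AS src,
        values(Destination_IDS) AS dest
        BY join_key
```

If no shared key exists in the events, correlation generally has to fall back to time-window matching, which is lossy.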
Please help create a regex that will only take the 4 characters/numbers after MTCP from the events below. For example, the regex should pick up YUM3, WOPP, WOLG. /XXX/XXXXX/XXXX/XXXXX/MFG_MTCPYUM3..XX_YUM3_XXXX /XXX/XXXXX/XXXX/XXXXX/MFG_MTCPWOPP..XX_WOPP_XXXX /XXX/XXXXX/XXXX/XXXXX/MFG_MTCPWOLG..XX_WOLG_XXXX Thanks
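A sketch using rex: the pattern anchors on the literal MTCP and captures the next four alphanumerics (YUM3, WOPP, WOLG in the samples above) into a field named code:

```spl
... | rex "MTCP(?<code>[A-Z0-9]{4})"
```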
Hi all, I received the following error when I try to apply adaptive thresholding on one KPI of a service where I enabled the backfill: "The following problems were encountered: insufficient data in ITSI summary index for policies ['Weekdays, 4 PM', 'Weekdays, 9 AM', 'Weekdays, 2 PM', 'Weekdays, 12 PM', 'Weekdays, 10 AM', 'Weekdays, 3 PM', 'Weekdays, 7 AM', 'Weekdays, 8 AM', 'Weekdays, 1 PM', 'Weekdays, 6 PM', 'Weekdays, 5 PM', 'Weekdays, 11 AM']", even though I know there should be data in the system. If I search the itsi_summary index, I see only information from the moment the backfill was activated (1 day ago), and not from the last 7 days. I've also checked that I have the rights for it. What else could I check to troubleshoot this issue? Thanks in advance!
Hello,  I'm trying to get a few things from my tstats search: count for last hour count for yesterday Use the two counts to get % change Typically I'd do a nested eval statement to get the info but it does not work with tstats:   | eval lastHours = relative_time(now(),"-h@h") | eval yesterday = relative_time("-1d@d","-2d@d") | stats count(yesterday) as yesterday count(lastHours) as lastHours by user src_ip | eval ChangePercent = (lastHours - yesterday) / 100     How would I get the info above with tstats?   | tstats `summariesonly` values(All_Traffic.src_zone) AS src_zone, values(All_Traffic.dest_ip) AS dest_ip, values(All_Traffic.dest_zone) AS dest_zone, values(All_Traffic.dest_port) AS dest_port, values(All_Traffic.rule) AS rule, values(All_Traffic.app) AS app, values(sourcetype) as event_source from datamodel=Network_Traffic.All_Traffic where All_Traffic.action="allowed" AND (earliest=-2d@d latest=now) by All_Traffic.user, All_Traffic.src_ip | `drop_dm_object_name("All_Traffic")`    
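One pattern (a sketch, untested) is to run tstats once per time window and combine the results with append — tstats accepts earliest/latest modifiers inside its where clause:

```spl
| tstats summariesonly=true count AS lastHour
    from datamodel=Network_Traffic.All_Traffic
    where All_Traffic.action="allowed" earliest=-1h@h latest=now
    by All_Traffic.user, All_Traffic.src_ip
| append
    [| tstats summariesonly=true count AS yesterday
        from datamodel=Network_Traffic.All_Traffic
        where All_Traffic.action="allowed" earliest=-1d@d latest=@d
        by All_Traffic.user, All_Traffic.src_ip]
| stats sum(lastHour) AS lastHour, sum(yesterday) AS yesterday
        BY All_Traffic.user, All_Traffic.src_ip
| eval ChangePercent=round((lastHour - yesterday) / yesterday * 100, 2)
```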
Is it possible to add a stanza field to outputs.conf on a light forwarder to add a delimiter to data that currently has none? I realise that if sending data (in this case, zeek logs), directly to Splunk this would work, but my data is going somewhere else first then to Splunk, which is why I need the data to be delimited before it's sent. Can it all be done in outputs.conf, or even can I add props.conf and transforms.conf on the light forwarder?
I'm trying to schedule a particular alert to run on the first Monday of each fiscal quarter using this cron expression:   0 9 1-7 2,5,8,11 1   My reading of this is "9:00am on the first Monday of Feb, May, Aug, and Nov". However, with this month being November (11) for some reason it is running it every Monday. It unexpectedly ran this past Mon Nov 16th and has a "next scheduled time" of Mon Nov 23rd. Given the day-of-month restriction (3rd field) of 1-7 I would not have expected this to happen. Any advice appreciated. Splunk Enterprise 8.0.6. Thanks.
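This is standard cron behavior rather than a Splunk bug: when both the day-of-month and day-of-week fields are restricted, they are OR'd, so 0 9 1-7 2,5,8,11 1 fires on days 1-7 of those months and additionally on every Monday of those months. One common workaround is to schedule 0 9 * 2,5,8,11 1 (every Monday in those months) and make the search a no-op outside the first week, e.g. a sketch:

```spl
<your alert search>
| where tonumber(strftime(now(), "%d")) <= 7
```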