Activity Feed
- Posted App Updates on All Apps and Add-ons. 10-27-2021 08:39 AM
- Posted Re: Upgrade Failing with OSError type 28 on Splunk Enterprise Security. 10-05-2021 05:51 AM
- Posted Why is Upgrade Failing with OSError type 28? on Splunk Enterprise Security. 10-05-2021 05:21 AM
- Posted Re: Per Index Configuration on Getting Data In. 08-05-2021 01:54 PM
- Posted Per Index Configuration on Getting Data In. 08-05-2021 12:45 PM
- Posted Re: Fire Brigade Not Working on All Apps and Add-ons. 08-04-2021 03:15 PM
- Posted Fire Brigade Not Working on All Apps and Add-ons. 08-04-2021 12:30 PM
- Posted TA-MS-defender no incident logs on All Apps and Add-ons. 06-25-2021 09:41 AM
- Posted Re: Monitoring Console shows all SHC members with the same instance name on Monitoring Splunk. 03-15-2021 11:25 AM
- Posted Re: Monitoring Console shows all SHC members with the same instance name on Monitoring Splunk. 03-15-2021 08:16 AM
- Posted Monitoring Console shows all SHC members with the same instance name on Monitoring Splunk. 03-15-2021 06:50 AM
- Posted Windows TA- Why are we not seeing any data in Splunk? on All Apps and Add-ons. 03-09-2021 06:53 AM
- Tagged Windows TA- Why are we not seeing any data in Splunk? on All Apps and Add-ons. 03-09-2021 06:53 AM
- Posted Re: User Mapping on Splunk Enterprise. 02-15-2021 12:17 PM
- Posted User Mapping on Splunk Enterprise. 02-15-2021 09:41 AM
- Tagged User Mapping on Splunk Enterprise. 02-15-2021 09:41 AM
- Got Karma for Migrate ES standalone to Cluster. 01-13-2021 12:55 PM
- Posted Syslog Output missing header on Getting Data In. 12-22-2020 10:49 AM
- Got Karma for Migrate ES standalone to Cluster. 10-01-2020 04:51 PM
- Posted Migrate ES standalone to Cluster on Splunk Enterprise Security. 06-29-2020 07:40 AM
10-27-2021
08:39 AM
Hello All, On my ADHOC search head, I used to be able to go see all the installed apps and which apps needed to be updated. I do not see that anymore, and I am not sure why. Any ideas? thanks ed
Labels: configuration, troubleshooting, upgrade
10-05-2021
05:51 AM
I found this article: https://community.splunk.com/t5/Getting-Data-In/Tutorial-data-upload-error/m-p/214891 I reduced the setting as stated to 4800 and the upgrade proceeded just fine. Weird.
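For anyone who finds this later: I believe the setting in that thread is minFreeSpace under [diskUsage] in server.conf — Splunk stops writing to disk (including the temp file created for an app upload) once free space drops below that threshold, which defaults to 5000 MB. That is an assumption on my part based on the linked thread, but the change would look like:

```ini
# server.conf -- assumed setting based on the linked thread; Splunk refuses
# writes (including upload temp files) when free disk space falls below
# this many MB (default 5000)
[diskUsage]
minFreeSpace = 4800
```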
... View more
10-05-2021
05:21 AM
Hello All,
I am testing the upgrade from ES 6.2.0 to 6.6.2. When I do the upgrade it fails with OSError type 28, "no space left on device", but there is almost 30 GB of disk space free.
2021-10-04 19:18:28,028 INFO [615bb5deed7f2dc4595650] _cplogging:216 - [04/Oct/2021:19:18:28] HTTP
Request Headers:
Remote-Addr: 127.0.0.1
TE: chunked
HOST: splunk-sh1.wv.mentorg.com:8000
ACCEPT-ENCODING: gzip, br
CACHE-CONTROL: max-age=0
SEC-CH-UA: "Google Chrome";v="93", " Not;A Brand";v="99", "Chromium";v="93"
SEC-CH-UA-MOBILE: ?0
SEC-CH-UA-PLATFORM: "Windows"
UPGRADE-INSECURE-REQUESTS: 1
ORIGIN: null
USER-AGENT: Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
SEC-FETCH-SITE: same-origin
SEC-FETCH-MODE: navigate
SEC-FETCH-USER: ?1
SEC-FETCH-DEST: document
ACCEPT-LANGUAGE: en-US,en;q=0.9
COOKIE: splunkweb_csrf_token_8000=[REDACTED]5649; session_id_8000=[REDACTED]5b74; token_key=[REDACTED]5649; experience_id=[REDACTED]b0c2; splunkd_8000=[REDACTED]tgchx
REMOTE-USER: admin
X-SPLUNKD: SKdIpkhtf8PlfUDwvOLunA== 11626949294704615649 ijbs1HY^4Ms541EE5sF6eqHg^iyD5t6QKZRByWhdMDXkj546^eB1lT6y59b9LewgHbLcz0Xa5SKotHijcl__zWhYqh8MZISrCqYVxuLkY7jijwyyXijSUQ9VAJRlcQA3o7tgchx 0
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryO0HdVIPxgJr5HUZN
Content-Length: 675766277
2021-10-04 19:18:28,029 INFO [615bb5deed7f2dc4595650] error:333 - POST /en-US/manager/appinstall/_upload 127.0.0.1 8065
2021-10-04 19:18:28,029 INFO [615bb5deed7f2dc4595650] error:334 - 500 Internal Server Error The server encountered an unexpected condition which prevented it from fulfilling the request.
2021-10-04 19:18:28,029 ERROR [615bb5deed7f2dc4595650] error:335 - Traceback (most recent call last):
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 628, in respond
self._do_respond(path_info)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cprequest.py", line 680, in _do_respond
self.body.process()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 982, in process
super(RequestBody, self).process()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 559, in process
proc(self)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 225, in process_multipart_form_data
process_multipart(entity)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 217, in process_multipart
part.process()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 557, in process
self.default_proc()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 717, in default_proc
self.file = self.read_into_file()
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 732, in read_into_file
self.read_lines_to_boundary(fp_out=fp_out)
File "/opt/splunk/lib/python3.7/site-packages/cherrypy/_cpreqbody.py", line 702, in read_lines_to_boundary
fp_out.write(line)
OSError: [Errno 28] No space left on device
As you can see, there should be plenty of room for a 670 MB upload:
splunk@splunk-sh1:~/var/log/splunk> df -kh /opt/splunk
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/system-splunk 74G 44G 27G 63% /opt
splunk@splunk-sh1:~/var/log/splunk>
web.conf:
splunk@splunk-sh1:~/var/log/splunk> more ~/etc/system/local/web.conf
[settings]
login_content = <h1> <CENTER>Splunk Dev Search Head</CENTER> </h1>
max_upload_size = 1024
enableSplunkWebSSL = 1
privKeyPath = /opt/splunk/etc/auth/splunkweb/com.key
caCertPath = /opt/splunk/etc/auth/splunkweb/expJun2022.crt
splunkdConnectionTimeout = 1400
tools.sessions.timeout = 180
sslVersions = ssl3,tls
cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH
splunk@splunk-sh1:~/var/log/splunk>
So I am confused why it would say that there is no space left on the device.
Thanks
ed
Labels: troubleshooting, upgrade
08-05-2021
01:54 PM
I did forget to add the following:
[default]
frozenTimePeriodInSecs = 10368000
homePath.maxDataSizeMB = 3000000
coldPath.maxDataSizeMB = 10598400
[volume:_splunk_summaries]
path = /splunk/cold/splunk_summaries
[volume:hot]
path = /splunk/hot
maxVolumeDataSizeMB = 3400000
[volume:cold]
path = /splunk/cold
maxVolumeDataSizeMB = 10957620
This is where I get confused. We have a total of 68TB of hot storage divided among the 20 indexers, so each indexer has a 3.4TB hot volume, and we have 220TB of cold storage, with each indexer having 11TB. I gave the default homePath 3TB, leaving 400GB of extra room, and I gave coldPath 10.5TB, leaving 500GB of extra room. But if, from my example, 30 days of hot/warm data for wineventlog is 19.5TB, does Splunk divide that automatically between all 20 indexers and apply the homePath limit to the total amount of data across all 20 indexers?
08-05-2021
12:45 PM
Hello All, I am trying to clean up our indexes and their sizes to ensure that we are keeping the correct amount of data in each index. I have about 5 to 10 really busy indexes that bring in most of the data:
pan_logs ~200GB/day
syslog ~10GB/day
checkpoint (coming soon) ~250GB/day
wineventlog ~650GB/day
network ~180GB/day
So the question is, when I create an index configuration, for example wineventlog:
[wineventlog]
homePath = volume:hot/wineventlog/db
homePath.maxDataSizeMB = 19500000
coldPath = volume:cold/wineventlog/colddb
coldPath.maxDataSizeMB = 58500000
thawedPath = /splunk/cold/wineventlog/thaweddb
maxHotBuckets = 10
maxDataSize = auto_high_volume
maxTotalDataSizeMB = 78000000
disabled = 0
repFactor = auto
30 days of hot/warm would be 19.5TB, 90 days of cold data would be 58.5TB, and the total size would be 78TB. The sizes would then be divided by the total number of indexers we have (20), so each indexer should host about 975GB of hot/warm and 2.925TB of cold data, and Splunk would start to roll data to frozen (/dev/null) when the total (hot/warm + cold) data reached 78TB. Is that correct? Do I need to specify maxTotalDataSizeMB if I am using the homePath and coldPath settings? Thanks ed
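A sketch of the other possible reading, in case it helps frame the question: my understanding is that indexes.conf size limits are enforced per indexer, not summed across the cluster, so a 78TB cluster-wide budget over 20 peers would instead be written with per-peer values like these (the numbers are just my 78TB example divided by 20, nothing verified):

```ini
# indexes.conf -- sketch assuming the limits apply per indexer:
# hot/warm 19.5TB / 20 peers = 975000 MB, cold 58.5TB / 20 = 2925000 MB
[wineventlog]
homePath.maxDataSizeMB = 975000
coldPath.maxDataSizeMB = 2925000
maxTotalDataSizeMB = 3900000
```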
08-04-2021
03:15 PM
I am logged in and running as admin with permissions to all indexes, so I do not think that is the issue.
08-04-2021
12:30 PM
Hello All, I have Fire Brigade TA v2.0.4 installed on all my indexers in my 20-node cluster, and I have the app installed on my DMC host. I did the default configuration, which is to allow the saved search to populate the "monitored_indexes.csv" file on all the indexers. When I bring up the app and start to research the indexes, I only see about 20 indexes in the Fire Brigade app, while the Splunk Monitoring Console says there are a total of 91 (internal and non-internal). So the configuration is quite simple: the TA is installed on all indexers in a 20-node cluster, the app is installed on the DMC, and the TA is not installed on the DMC search head or on the cluster master. From what I can tell it should just work. It has been installed for months and I still cannot get it to recognize all the indexes we have in our environment. Ideas? thanks Ed
Labels: configuration, installation, troubleshooting
06-25-2021
09:41 AM
Hello All, I have configured TA-MS-defender and we are collecting ATP logs just fine. But the Incident logs keep giving me the following error:
2021-06-25 09:36:35,832 ERROR pid=4306 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
File "/opt/splunk/etc/apps/TA-MS_Defender/bin/ta_ms_defender/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
self.collect_events(ew)
File "/opt/splunk/etc/apps/TA-MS_Defender/bin/microsoft_365_defender_incidents.py", line 72, in collect_events
input_module.collect_events(self, ew)
File "/opt/splunk/etc/apps/TA-MS_Defender/bin/input_module_microsoft_365_defender_incidents.py", line 69, in collect_events
incidents = azutil.get_atp_alerts_odata(helper, access_token, incident_url, user_agent="M365DPartner-Splunk-M365DefenderAddOn/1.3.0")
File "/opt/splunk/etc/apps/TA-MS_Defender/bin/azure_util/utils.py", line 57, in get_atp_alerts_odata
raise e
File "/opt/splunk/etc/apps/TA-MS_Defender/bin/azure_util/utils.py", line 40, in get_atp_alerts_odata
r.raise_for_status()
File "/opt/splunk/etc/apps/TA-MS_Defender/bin/ta_ms_defender/aob_py3/requests/models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.security.microsoft.com/api/incidents?$filter=lastUpdateTime+gt+2000-01-01T00:00:00Z
Any ideas or help? thanks ed
Labels: configuration, installation, troubleshooting
03-15-2021
11:25 AM
But why did it change?
03-15-2021
08:16 AM
@Vardhan Here is the output from the hosts:
splunk@splk-es-app-01:~> splunk show servername
Your session is invalid. Please login.
Splunk username: admin
Password:
Server name: splk-es-app-01
splunk@splk-es-app-01:~> splunk show default-hostname
Default hostname for data inputs: splk-es-app-03.
splunk@splk-es-app-01:~>
splunk@splk-es-app-02:~> splunk show servername
Your session is invalid. Please login.
Splunk username: admin
Password:
Server name: splk-es-app-02
splunk@splk-es-app-02:~> splunk show default-hostname
Default hostname for data inputs: splk-es-app-03.
splunk@splk-es-app-02:~>
splunk@splk-es-app-03:~> splunk show servername
Your session is invalid. Please login.
Splunk username: admin
Password:
Server name: splk-es-app-03
splunk@splk-es-app-03:~> splunk show default-hostname
Default hostname for data inputs: splk-es-app-03.
splunk@splk-es-app-03:~>
It appears that all of them have the same default-hostname. thanks ed
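In case it helps anyone else debugging the same thing: my understanding is that `splunk show default-hostname` reads the `host` setting from the [default] stanza of inputs.conf, so a value captured on splk-es-app-03 and then pushed to all three members (for example in a deployer bundle) would look exactly like this. The path below is the usual location, not something I verified on these hosts:

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf -- assumed location; if this
# [default] host was baked into a shared bundle, every member reports
# the same default-hostname
[default]
host = splk-es-app-03
```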
03-15-2021
06:50 AM
Hello All, I added our ES SHC to our monitoring console and the instance (host) name is the same for all 3 search head cluster nodes, even though the server names (servername) are all unique. How do I resolve this issue? thanks ed
03-09-2021
06:53 AM
Hello All,
I am trying to ingest some Azure data from our DCs. I added the following two stanzas to our Splunk_TA_windows inputs.conf, and we still do not see any data, nor any errors, from any of the hosts that have the Azure data.
[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Admin]
index = wineventlog
disabled = 0
renderXml = true
[WinEventLog://Microsoft-AzureADPasswordProtection-DCAgent/Operational]
index = wineventlog
disabled = 0
renderXml = true
Not sure why we are not seeing any data in Splunk. The AD admin says he sees the logs on the host, but they are not in Splunk. So it seems that Splunk is not ingesting the data, and I am lost as to why.
Thanks
Labels: configuration, troubleshooting
02-15-2021
12:17 PM
authorize.conf is used for mapping capabilities to roles https://docs.splunk.com/Documentation/Splunk/8.1.2/Admin/authorizeconf
02-15-2021
09:41 AM
Hello All, I am trying to find where a user is getting mapped to a role. I can see that the user is mapped to the power role in the web UI, but I do not see the user being mapped in /opt/splunk/etc/system/local/authentication.conf. There is also nothing in /opt/splunk/etc/apps/* that would map the user to the power role. So what am I missing? Thoughts? thanks ed
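One place worth checking, in case LDAP is doing the mapping: group-to-role assignments live in a roleMap stanza in authentication.conf, keyed by the LDAP strategy name. The strategy and group names below are hypothetical, just to show the shape:

```ini
# authentication.conf -- "corpLDAP" and "Splunk_Power_Users" are hypothetical
# names; the stanza maps members of an LDAP group to the Splunk "power" role
[roleMap_corpLDAP]
power = Splunk_Power_Users
```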
Tags: authentication, roles
Labels: administration, configuration
12-22-2020
10:49 AM
Hello All, I found a similar question but did not see an answer: https://community.splunk.com/t5/Getting-Data-In/No-time-or-host-in-forwarded-syslog-messages/m-p/52627
I am forwarding Checkpoint logs that are coming in via tcp://514, and I am trying to forward the data to an HA syslog-ng environment. There is a NetScaler in front of two different syslog-ng servers with round-robin load balancing. I disabled the second syslog-ng host so that all logs get sent to sys-01. I see the following coming in:
Msg: 2020-12-22 18:30 host-blah-blah.xxx.xxx.xxx.com time=1608661800|hostname=logger|product=Firewall|layer_name=xx-stl-private Security|layer_uuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|match_id=197|parent_rule=0|rule_action=Accept|rule_uid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|action=Accept|conn_direction=Internal|ifdir=inbound|ifname=eth2-01.716|logid=0|loguid={0x00000000,0x00,0x0000000,0xc0000000}|origin=xxx.xxx.xxx.xxx|originsicname=blah_gw-stl-prv|sequencenum=199|time=1608661800|version=5|dst=xxx.xxx.xxx.xxx|log_delay=1608661800|proto=6|s_port=47298|service=7031|src=xxx.xxx.xxx.xxx|
From the previous link that seems to be a bug, but I am going to assume that it is an old bug that should not exist in Splunk version 8.0.6. Is there a way in outputs.conf to force a header that has the hostname? Thanks ed
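On the outputs.conf question: the syslog output group does have a timestampformat setting that prepends a syslog-style timestamp header to each forwarded event (as far as I know, the hostname portion of a classic syslog header is not configurable the same way). A sketch, with a hypothetical group name and server address:

```ini
# outputs.conf -- "sys01" and the server address are illustrative;
# timestampformat prepends a syslog-style timestamp to each event
[syslog:sys01]
server = sys-01.example.com:514
type = tcp
timestampformat = %b %e %H:%M:%S
```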
06-29-2020
07:40 AM
2 Karma
I have a couple of questions about migrating the ES standalone search head to a clustered search head. I have tested two different methods; mine works well and the Splunk method doesn't.
My Method:
- Create tarball backups of /opt/splunk/etc/apps and /opt/splunk/etc/users from the standalone ES search head
- Create a kvstore backup from the standalone ES search head
- Build deployer and search head cluster
- Set up LDAP
- Set up licensing
- Extract users.tar in /opt/splunk/etc/shcluster on the deployer
- Extract apps.tar in /opt/splunk/etc/shcluster on the deployer
- Remove all apps that are default install apps
- Apply the shcluster bundle
- Restore the kvstore on the captain
- Restore the kvstore on the other two nodes in the cluster
- Restart the entire SH cluster
- Done. The team tested, and everything looks like it is there and seems to be working.
Splunk's Method:
- Create tarball backups of /opt/splunk/etc/apps and /opt/splunk/etc/users from the standalone ES search head
- Create a kvstore backup from the standalone ES search head
- Build out deployer and search head cluster
- Set up LDAP
- Set up licensing
- Install the latest version of ES on the deployer
- Install all TAs that will be used
- Deploy ES out to the cluster
- Restore users.tar in /opt/splunk/etc/shcluster
- Restore other apps in /opt/splunk/etc/shcluster (this process is more tedious, as you have to break out all items from the ES app individually)
- Deploy apps and users out to the shcluster
- Restore the kvstore on the captain
- Restore the kvstore on the other two nodes
- Restart the SH cluster
- Done. The team tested and it seems to be working, but none of the datamodels are working and it does not appear to recognize the kvstore restore.
Labels: using Enterprise Security
05-20-2020
11:47 AM
Hello All
I have a time prefix question
Here are my timestamps:
May 20 10:59:30 svr-orw-nac-01 2020-05-20 17:59:30,646
May 20 11:01:01 svr-ies-nac-02 2020-05-20 18:01:01,389
I am setting props.conf to be the following:
[source::/var/log2/gns/nac/log_*]
MAX_TIMESTAMP_LOOKAHEAD = 31
TIME_PREFIX = ^\w+\s\d+\s\d+:\d+:\d+\ssvr-.*-nac-\d[01|02]\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
Does this look right?
Thanks
ed
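One thing I would double-check in that TIME_PREFIX: [01|02] is a character class (any one of the characters 0, 1, 2, or a literal |), not an alternation, even though it happens to match these hostnames. A grouped alternation says what is actually meant; a sketch of the same stanza with that change:

```ini
# props.conf -- sketch; (?:01|02) is a real alternation, unlike [01|02]
[source::/var/log2/gns/nac/log_*]
TIME_PREFIX = ^\w+\s\d+\s\d+:\d+:\d+\ssvr-\w+-nac-(?:01|02)\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 31
```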
05-18-2020
09:03 AM
Hello,
I am assuming that you are referring to using props and transforms to change the sourcetype. Am I wrong?
So I would use the current sourcetype in props.conf
[stream:netflow]
TRANSFORMS-set_sourcetype = set_netscaler
Then I would setup the transforms.conf
[set_netscaler]
REGEX = .
FORMAT = sourcetype::citrix_netscaler_netflow
DEST_KEY = MetaData:Sourcetype
But that would change the sourcetype for all data that comes in via the original sourcetype stream:netflow.
Thanks
ed
05-13-2020
04:59 PM
I created a stream for netflow, and the sourcetype comes in as stream:netflow. Is there a way to change the sourcetype prior to it being ingested into Splunk?
thanks
ed
05-08-2020
10:33 AM
Hello All,
We were using Splunk_TA_ipfix to collect the NetScaler Appflow logs and send them to our index cluster. With the release of Splunk_TA_citrix_netscaler 7.0.1, it states to collect Appflow logs using Splunk Stream. I am not sure what I am doing wrong. Here is my distributed environment:
2 Non-Clustered ADHOC SH
1 Non-Clustered ES SH
13 Node Index cluster
I installed the NetScaler TA on all SHs and all indexers
I installed Stream on one of my ADHOC SHs that is not busy
I installed the Stream TA on a heavy forwarder that was configured to receive AppFlow data when the ipfix TA was installed.
Splunk_TA_stream configuration files:
streamfwd.conf:
[streamfwd]
netflowReceiver.0.ip = 0.0.0.0
netflowReceiver.0.port = 4739
netflowReceiver.0.protocol = udp
netflowReceiver.0.decoder = netflow
inputs.conf :
[streamfwd://streamfwd]
splunk_stream_app_location = https://adhoc_sh_1:8000/en-us/custom/splunk_app_stream/
stream_forwarder_id =
disabled = 0
I do not see any data being forwarded to the ad hoc SH, nor do I see any data being sent to the indexers for the NetScaler AppFlow sourcetype. The instructions for collecting IPFIX/AppFlow are about as clear as mud on a moonless, cloudy night in the middle of winter. I know I do not have the inputs set up properly, and I am not sure what else I have wrong. Any help would be greatly appreciated.
Thanks,
Ed
03-05-2020
09:51 AM
4 Karma
Will the current version of the NetApp ONTAP app, 2.1.91, work with Splunk Enterprise 8.0.2? And since I am doing the upgrade, will the NetApp TA 2.1.91 work as well?
Thanks
ed
02-19-2020
05:28 AM
Is it possible to use multiple wildcards in the host:: stanza in the props.conf file?
[host::svr-*-blah-*]
TRANSFORMS-remove = remove_stuff
So we are trying to remove stuff from multiple hosts in different geographical locations that have very similar names
svr-us-blah-01
svr-us-blah-02
svr-us-blah-03
svr-eur-blah-01
svr-eur-blah-02
svr-eur-blah-03
svr-pac-blah-01
svr-pac-blah-02
svr-pac-blah-03
Each host will collect very similar logs and then forward them to Splunk, but we want to dump the noise, so I was hoping that I could just use the [host::svr-*-blah-*] stanza to apply the same props/transforms to each host for dumping the noise.
Will that work?
thanks
ed
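For completeness, a sketch of what the remove_stuff transform in transforms.conf might look like; the REGEX is a placeholder for the actual noise pattern, not anything from our config:

```ini
# transforms.conf -- "noise_pattern_here" is a placeholder; events matching
# REGEX are routed to nullQueue and dropped before indexing
[remove_stuff]
REGEX = noise_pattern_here
DEST_KEY = queue
FORMAT = nullQueue
```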
12-03-2019
12:57 PM
Hello All,
I have internal private certs for our Splunk environment. Currently, after I install a UF on Windows or Linux, I have to edit the etc\system\local\server.conf file to change the sslKeysfilePassword. If I do not change the password, the UF will never check in with the deployment server. Is there a way to set the sslKeysfilePassword at the time of installation?
thanks
ed
12-02-2019
03:07 PM
Hello All,
I am following the instructions to download the TAs so that I can install them on my indexers, but I do not see the download package:
https://docs.splunk.com/Documentation/ES/6.0.0/Install/InstallTechnologyAdd-ons
I do not see anywhere to select the package to download
Thanks
ed
11-20-2019
09:50 AM
Hello All,
I am working on tuning the Network - Unroutable Host Activity - Rule search, and we are trying to exclude from the search our VPN networks and our DNS hosts that send data to 192.0.0.0. I thought I could run a loop for every IP address of our DNS servers. Here is what I added to the fully blown-out search, but it does not seem to be working like I thought it would.
| tstats prestats=true local=false summariesonly=true allow_old_summaries=true count,values(sourcetype) from datamodel=Network_Traffic.All_Traffic where All_Traffic.action=allowed by All_Traffic.src,All_Traffic.dest
| rename "All_Traffic.*" as "*"
| tstats prestats=true local=false summariesonly=true allow_old_summaries=true append=true count,values(sourcetype) from datamodel=Intrusion_Detection.IDS_Attacks by IDS_Attacks.src,IDS_Attacks.dest
| rename "IDS_Attacks.*" as "*"
| tstats prestats=true local=false summariesonly=true allow_old_summaries=true append=true count,values(sourcetype) from datamodel=Web.Web by Web.src,Web.dest
| rename "Web.*" as "*"
| stats count,values(sourcetype) as sourcetype by src,dest
| lookup local=true bogonlist_lookup_by_cidr ip AS "src" OUTPUTNEW is_bogon AS "src_is_bogon",is_internal AS "src_is_internal"
| lookup local=true bogonlist_lookup_by_cidr ip AS "dest" OUTPUTNEW is_bogon AS "dest_is_bogon",is_internal AS "dest_is_internal"
| search (NOT dest=169.254.* NOT src=169.254.* ((dest_is_internal!=true dest_is_bogon=true) OR (src_is_internal!=true src_is_bogon=true)))
| search (NOT dest=[| inputlookup mentor_assets.csv | search nt_host="*-ddi-*" | table ip] NOT src=[| inputlookup mentor_assets.csv | search nt_host="*-ddi-*" | table ip] ((dest_is_internal!=true dest_is_bogon=true) OR (src_is_internal!=true src_is_bogon=true)))
| eval bogon_ip=if(((dest_is_bogon == "true") AND (dest_is_internal != "true")),dest,bogon_ip), bogon_ip=if(((src_is_bogon == "true") AND (src_is_internal != "true")),src,bogon_ip)
| fields + sourcetype, src, dest, bogon_ip
I added the following line:
| search (NOT dest=[| inputlookup mentor_assets.csv | search nt_host="*-ddi-*" | table ip] NOT src=[| inputlookup mentor_assets.csv | search nt_host="*-ddi-*" | table ip] ((dest_is_internal!=true dest_is_bogon=true) OR (src_is_internal!=true src_is_bogon=true)))
But like I stated it does not seem to be working like I thought it should. Any help would be appreciated.
Thanks
ed
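My best guess at the problem, in case someone else hits the same thing (this is an assumption; I have not re-run the search to confirm): a subsearch returns its own field names, so [| inputlookup mentor_assets.csv | search nt_host="*-ddi-*" | table ip] expands into terms on a field called ip, which never compares against dest or src. Renaming the lookup field inside each subsearch to match the field being filtered should make the exclusion apply as intended:

```
| search NOT [| inputlookup mentor_assets.csv | search nt_host="*-ddi-*" | rename ip AS dest | fields dest]
         NOT [| inputlookup mentor_assets.csv | search nt_host="*-ddi-*" | rename ip AS src | fields src]
```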