Hello, you should check the DNS records on your server; I'm not sure the internal logs can help. In the worst case, use this host override as an example:
props.conf
[host::<IP address>]
TRANSFORMS-<hostname> = <hostname>_override
transforms.conf
[<hostname>_override]
REGEX = (.*)
DEST_KEY = MetaData:Host
FORMAT = host::<FQDN>
I believe this article may be of use: "your Splunk ID is the same one you use to log into splunk.com, e.g. to download Splunk Enterprise." https://community.splunk.com/t5/Knowledge-Management/Where-Can-I-find-my-Splunk-ID/m-p/516404 If this helped, karma is appreciated.
Hello fellow Splunkers! We want to ingest Oracle Fusion Applications (SaaS) audit logs into on-prem Splunk, and the only way to do this is through the REST API GET method. Since I cannot find a REST input option in Splunk or any free Splunk add-on for this task, everything I have read online suggests developing a script. I need your support: please share a sample Python script that not only pulls the logs but also avoids duplicate logs on every pull. Thanks in advance!
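As a starting point, here is a minimal, hedged Python sketch of the deduplication side of such a poller. The checkpoint file name and the "id" field used to detect duplicates are assumptions, not Oracle Fusion specifics; the actual HTTP GET (endpoint URL, auth) is left to a caller-supplied function, since those details depend on your tenant.

```python
import json
import os

CHECKPOINT_FILE = "audit_checkpoint.json"  # hypothetical state file


def load_checkpoint(path=CHECKPOINT_FILE):
    """Return the last event id we indexed, or None on the first run."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f).get("last_id")
    return None


def save_checkpoint(last_id, path=CHECKPOINT_FILE):
    """Persist the newest event id so the next pull can skip old events."""
    with open(path, "w") as f:
        json.dump({"last_id": last_id}, f)


def dedupe_new_events(events, last_id):
    """Keep only events newer than the checkpoint.

    Assumes each event dict carries a monotonically increasing "id"
    (an event timestamp works the same way).
    """
    if last_id is None:
        return events
    return [e for e in events if e["id"] > last_id]


def poll_once(fetch_events, path=CHECKPOINT_FILE):
    """One polling cycle: fetch, drop duplicates, advance the checkpoint.

    fetch_events is any callable returning a list of event dicts, e.g. a
    urllib.request wrapper around the audit REST GET endpoint.
    """
    last_id = load_checkpoint(path)
    new_events = dedupe_new_events(fetch_events(), last_id)
    if new_events:
        save_checkpoint(max(e["id"] for e in new_events), path)
    return new_events
```

In a real deployment you would run this on a schedule (e.g. as a Splunk scripted input) and write the returned events to stdout so Splunk indexes them.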
Hi @Jamilahmajed,
I’m a Community Moderator in the Splunk Community.
This question was posted 1 year ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post.
Thank you!
We tried to do a similar thing, but we found the data from VulDB lacking in precision. The vulnerability descriptions were free-form text; sometimes the affected software versions were mentioned, sometimes not...
Hi @PickleRick, we receive the messages from VulDB, format them, and send them to a customer's external system. Then we receive assets from their asset management system and compare them with the daily CVEs; there are two fields in the record layout for doing this. Ciao. Giuseppe
It would help if you told us where this setting is. You can't expect us to know everything that has the word "splunk" anywhere in it and be able to administer anything you throw at us. Yes, we're good, but maybe not that good.
At first you got me a bit confused, since Splunk has its own internal audit logs... But since you're talking about output from auditd, there are indeed two paths you can take:
1) Limit the source by writing audit rules so that only relevant events are logged. This can also have the nice side effect of slightly lowering the load on your audited host and decreasing storage needs.
2) Filter the data on the receiving end with props/transforms. This is a viable solution if you're gathering the audit logs elsewhere as well and only want to limit what is indexed in Splunk, or if you cannot write audit rules precisely enough.
Of course, the general remarks from @gcusello about the "why" side of ingesting those logs are very much relevant.
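For the props/transforms path, a minimal sketch of index-time filtering looks like the following. The sourcetype name and the auditd record types being dropped are assumptions; adjust both to your deployment, and note this must live on the indexer or heavy forwarder, not a universal forwarder.

```
# props.conf (hypothetical sourcetype for auditd output)
[linux_audit]
TRANSFORMS-drop_noise = drop_audit_noise

# transforms.conf -- route matching events to nullQueue (discard)
[drop_audit_noise]
REGEX = type=(CRED_ACQ|CRED_DISP|USER_END)
DEST_KEY = queue
FORMAT = nullQueue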
You can use eventstats, as @dtburrows3 already pointed out; just keep in mind that Splunk will put those sums into every single row of your results. It's not Excel; you cannot merge cells here.
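To illustrate the eventstats behavior described above, here is a small sketch; the index and field names are hypothetical. The total appears repeated on every row, which is exactly what makes it usable in a per-row eval:

```
index=sales sourcetype=orders
| eventstats sum(amount) as total_amount
| eval pct_of_total=round(100*amount/total_amount, 1)
```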
@gcusello an off-topic question - were you able to do anything reasonable with the VulDB data? We tested the app for a while with one customer, but it turned out the returned data was useless due to a complete lack of any reasonable structure.
You can't search data you don't have. So first you have to ask yourself if you have any data regarding your assets in your Splunk and if you do, what kind of information it contains.
Hello, I would like my Unifi UDM-SE router/firewall to send its logs to my VM (Splunk + Ubuntu Server). What I have done:
- on the Proxmox VM, no firewall (during the test)
- on my VM I have two NICs: one for management (network 205) and one for the remote logging destination (Splunk - network 203, the same as my UDM network)
- on my VM, ufw is running, and I have opened ports 9997 and 514
- on my UDM-SE, I have forwarded the syslog to my remote Splunk server (network 203)
On the Splunk server, ports 514 and 9997 are listening. So far, no logs appear in Splunk. How does ufw deal with two different networks? How do I add the second NIC (network 203) to Splunk? Ideas?
Hi, I suppose this query works for you:
index=notable
| stats count as alert_num by rule_name
| rename rule_name as csearch_label
| lookup savedsearches csearch_label as csearch_label OUTPUTNEW action.notable.param.security_domain as security_domain, description, eai:acl.app as app
| search app="SplunkEnterpriseSecuritySuite"
| table alert_num, csearch_label, app, security_domain, description | sort - alert_num
Hi @brenner, if it isn't on Splunkbase, it doesn't exist! In my experience there are rarely add-ons for these kinds of devices; I have already created custom add-ons for other storage systems. You have to create a custom add-on. To do this, you could use the Splunk Add-On Builder (https://splunkbase.splunk.com/app/2962) or SA-CIM Vladiator (https://splunkbase.splunk.com/app/2968), or both of them. Ciao. Giuseppe
Hi @simuneer, as I said, if you have a CVE list (e.g. the one from VulDB) you can check the contents of each CVE against your data. Otherwise, you have two solutions:
- identify the patterns to search for (e.g. Log4j) in your logs and run a search containing those patterns, or
- maintain an Asset Inventory and extract the device classes from each CVE to associate the CVE with your assets.
As I said, we implemented for a customer a connection with VulDB (it is a paid service) using an app from Splunkbase, and we developed an app to integrate this data with the customer's Asset Inventory. Ciao. Giuseppe
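For the first solution, a pattern hunt can be sketched like this; the index scope and search terms are illustrative only (here, strings commonly associated with Log4j exploitation attempts), and you would substitute the patterns extracted from the CVE you are checking:

```
index=* ("log4j" OR "jndi:ldap")
| stats count by host, sourcetype
```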
Hi @LearningGuy, let me know if I can help you further, or, please, accept an answer for the other members of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @jessieb_83, in my mind you should follow a different approach. These are the waterfall questions you need to answer to define what to index:
- What do I want to monitor?
- Which Use Cases do I want to implement?
- Which data are mandatory for my Use Cases?
Once you have defined your monitoring perimeter (in terms of devices and data sources), you can implement filters on your data to index only the data required for your Use Cases. If you're speaking of Security Monitoring, you could use the Splunk Security Essentials app (https://splunkbase.splunk.com/app/3435) to define your Use Cases and the mandatory data for them. Ciao. Giuseppe