JSON is a structured format, so the order of fields should not matter to the recipient. After all, you will be addressing the fields by their names. Unless you're manipulating that JSON on the receiving end with something not fit for JSON processing; in that case, maybe JSON is not the best format choice.
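The point about addressing fields by name can be shown with a short Python sketch (payloads are made up for illustration):

```python
import json

# Two JSON documents carrying the same fields in different orders.
a = json.loads('{"id": 1, "name": "alice"}')
b = json.loads('{"name": "alice", "id": 1}')

# A consumer that accesses fields by name sees no difference,
# and dict equality ignores key order entirely.
print(a["name"], a["id"])  # alice 1
print(b["name"], b["id"])  # alice 1
print(a == b)              # True
```

Key order only starts to matter when the receiver treats the JSON as text, e.g. diffing or hashing the raw string.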
Hi @hazem , it depends on your use cases: what do you want to monitor? Use a log level that gives you the data you need; I cannot tell you from the outside what the best log level is. In general, except in some situations, I'd avoid the Debug level and use the Alert level, but, as I said, it depends on your use cases. In addition, this is a question for Palo Alto experts rather than Splunk ones, because they know the contents of each log level. Ciao. Giuseppe
Thank you @gcusello for your reply. Our customer has asked me about the recommended log level that should be sent, for example, from Palo Alto to Splunk. Do you have any answer for this?
Hi @hazem , for many products there are integration guides for Splunk developed by the third-party vendor itself, and the only way to find them is to search on Google (e.g., for Sophos see https://partnernews.sophos.com/it-it/2021/05/prodotti/splunk-integration-for-sophos-firewall/). Anyway, searching for "Splunk Getting Data In" gives you a guide to data ingestion in Splunk: https://docs.splunk.com/Documentation/Splunk/latest/Data/WhatSplunkcanmonitor . In any case, the first approach to data ingestion should be to identify the technology to ingest and search for the related Add-on on apps.splunk.com, which usually guides users through the integration. Ciao. Giuseppe
Hi, the website behind the link STEP | Splunk Training and Enablement Platform: Course and Class Details shows me only classes that cost 750 USD. Do I have to apply a code during the booking process? Furthermore, I'm getting the following error message when I try to book the class.
Is there any guide on how to configure security products to send their logs to Splunk, or on the recommended logs that should be sent, like the DSM guide in QRadar?
I think it is not possible to change the order of the fields with the tojson command. But try creating a JSON object with eval and the json_object function; maybe that accomplishes your goal. ...your search... | eval my_json_object=json_object("timestamp",timestamp,"Subject",Subject,"emailBody",emailBody,"operation",operation)
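The idea behind json_object, building the object with its keys in exactly the order you want, can be mimicked outside Splunk as well. A minimal Python sketch (field names taken from the thread, values hypothetical):

```python
import json

# Build the object with keys in the desired order; json.dumps
# serializes a Python dict in its insertion order.
event = {
    "timestamp": "2024-01-01T00:00:00Z",
    "Subject": "Quarterly report",
    "emailBody": "Please find attached...",
    "operation": "send",
}
serialized = json.dumps(event)
print(serialized)
```

Building the object explicitly, rather than letting a converter pick the order, is what gives you control over the serialized layout.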
Hi @Alex_Rus , please try this regex in the blacklist option: (?ms)EventCode: 4624.*Account Name: MY_Account Name.*Source Network Address: MY_Source Network Address Otherwise, please try it in transforms.conf (on the indexers) to filter events. Ciao. Giuseppe
Log Name: Security
EventCode: 4624
EventType: 0
ComputerName: MY_ComputerName
SourceName: Microsoft Windows security auditing.
Type: Information
RecordNumber: 93370261535
Keywords: Audit Success
TaskCategory: Logon
OpCode: Info
Message: An account was successfully logged on.
Subject:
  Security ID: S-1-0-0
  Account Name: -
  Account Domain: -
  Logon ID: 0x0
Logon Information:
  Logon Type: 3
  Restricted Admin Mode: No
  Virtual Account: No
  Elevated Token: Yes
  Impersonation Level: Impersonation
New Logon:
  Security ID: S-1-5-21-877741627-2216037581-1742749770-81699
  Account Name: MY_Account Name
  Account Domain: MY_Account Domain
  Logon ID: 0x2153A91CB
  Linked Logon ID: 0x0
  Network Account Name: -
  Network Account Domain: -
  Logon GUID: {-}
Process Information:
  Process ID: 0x0
  Process Name: -
Network Information:
  Workstation Name: -
  Source Network Address: MY_Source Network Address
  Source Port: Port
Detailed Authentication Information:
  Logon Process: Kerberos
  Authentication Package: Kerberos
  Transited Services: -
  Package Name (NTLM only): -
  Key Length: 0

This event is generated when a logon session is created. The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network). The New Logon fields indicate the account for whom the new logon was created, i.e., the account that was logged on. The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The impersonation level field indicates the extent to which a process in the logon session can impersonate. The authentication information fields provide detailed information about this specific logon request. Logon GUID is a unique identifier that can be used to correlate this event with a KDC event. Transited services indicate which intermediate services have participated in this logon request. Package name indicates which sub-protocol was used among the NTLM protocols. Key length indicates the length of the generated session key. This will be 0 if no session key was requested.
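Before committing a blacklist regex to configuration, it is worth checking it against a sample event. A Python sketch using the (?ms) regex from the earlier reply against a trimmed version of this sample (placeholder values as in the post):

```python
import re

# Blacklist regex from the earlier reply; (?s) via the inline (?ms)
# flags lets "." span newlines in the multi-line event.
pattern = re.compile(
    r"(?ms)EventCode: 4624.*Account Name: MY_Account Name"
    r".*Source Network Address: MY_Source Network Address"
)

# Trimmed sample event.
event = """Log Name: Security
EventCode: 4624
New Logon:
  Account Name: MY_Account Name
Network Information:
  Source Network Address: MY_Source Network Address"""

print(bool(pattern.search(event)))  # True: this event would be filtered
```

The same pattern semantics apply in Splunk's blacklist option, since it uses PCRE-style regexes.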
Hi @Alex_Rus , you have to find a regex to filter your events; you can test your regex using the regex command. Then put the regex in inputs.conf: blacklist = key=regex If you want more help, please share a sample of the logs to filter. Ciao. Giuseppe
If I use blacklist, how can I filter by multiple events at once? I need to filter by Account_Name, Source_Network_Address, and EventCode. How will the stanza look?
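For what it's worth, a hedged sketch of how such a stanza is usually built: Windows event log inputs accept multiple key=regex pairs on one blacklist line (they are ANDed), and numbered entries blacklist1 through blacklist9 for separate rules. The stanza name and placeholder values below are illustrative; adapt them to your input and check the inputs.conf spec for your version.

```ini
# inputs.conf (illustrative)
[WinEventLog://Security]
# All key=regex pairs on one line must match for the event to be dropped.
blacklist1 = EventCode="4624" Message="(?ms).*Account Name:\s+MY_Account Name.*Source Network Address:\s+MY_Source Network Address"
```

Account name and source address are matched inside the Message field here because they are not top-level keys of the event header.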
Check the Cisco add-on: https://splunkbase.splunk.com/app/1467 . For every log source / event type, vendor-specific parsers are required to parse the data and provide dashboards and contextualization, so do add the relevant apps, e.g. the Cisco one.
@sayala Firstly, I would say this is a "not best practice" use of tags, for the reasons you are running up against now. Surely something like a custom field would be better, as you can both populate and use it in any way you want, and it comes into Splunk too with the container data if you are using the tags for trending etc.? I can't see a REST endpoint for tag management at a system level, which would have been your best option for doing this at any scale. Unfortunately, for now and without a lot of potential digging, you will need to delete them manually. I would advise you to think of a different way, though, otherwise you will face a buggy UI going forward. Hope this helped! Happy SOARing
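If per-container cleanup is acceptable, the manual work can at least be scripted. A hedged Python sketch: the tag filtering is plain list manipulation, while the REST call at the end (endpoint path and ph-auth-token header) is an assumption to verify against your SOAR REST API documentation.

```python
import json

def remove_tag(tags, unwanted):
    """Return the tag list with every occurrence of `unwanted` removed."""
    return [t for t in tags if t != unwanted]

# Build the update payload for one container (sample tags are made up).
payload = json.dumps({"tags": remove_tag(["prod", "legacy", "prod"], "legacy")})
print(payload)  # {"tags": ["prod", "prod"]}

# Hypothetical: POST the payload per container, e.g. with requests:
#   requests.post(f"{base_url}/rest/container/{container_id}",
#                 headers={"ph-auth-token": token}, data=payload)
# Endpoint and auth header are assumptions; confirm in your SOAR docs.
```

Looping this over a container search result would clear a tag fleet-wide, which is roughly the system-level management the UI lacks.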
The operation of SmartStore has been confirmed. I have a question regarding the 100 GB of EBS attached to EC2. If I do not put the max_cache_size setting in indexes.conf, will it freeze when the cache fills the 100 GB? In another test, an EBS volume created with 10 GB froze with a full-capacity error when max_cache_size was not set. What I would like to ask is whether, if I don't set max_cache_size, it will stop when the volume becomes full.
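For reference, a hedged sketch of where the cache ceiling is usually set: in current Splunk versions the SmartStore cache manager's limit lives in server.conf under [cachemanager] rather than indexes.conf (the value below is illustrative, in MB):

```ini
# server.conf on the indexer (illustrative value)
[cachemanager]
# Upper bound for the SmartStore local cache, in MB. With no explicit
# bound, eviction is driven by free-space settings, so an undersized
# volume can still fill up before eviction keeps pace.
max_cache_size = 80000
eviction_policy = lru
```

Sizing the bound comfortably below the volume capacity leaves headroom for in-flight downloads; verify the exact setting names against the server.conf spec for your Splunk version.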
I have a Splunk query which generates output in CSV/table format. I wanted to convert this to JSON format before writing it into a file. tojson does the job of converting; however, the fields are not in the order I expect them to be. Table output: timestamp,Subject,emailBody,operation --> the resulting JSON output is in the order subject,emailbody,operation,timestamp. How do I make tojson write fields in this order, or is there an alternate way of getting the JSON output as expected?
Hi, I'm trying to enhance the functionality of the "Acknowledge" button in a Splunk IT Service Intelligence episode. When I click on it, I want it to not only change the status to "In Progress" and assign the episode to me, but also trigger an action such as sending an email or creating a ticket in a ticketing system. I'm aware that automatic action rules can be set in aggregation policies, but I want these actions to occur specifically when I manually click the "Acknowledge" button. Is there a way to achieve this? Thanks!