All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Thank you so much, it works for me.
Splunk UBA users not able to log in to Splunk when Splunk is on SSO
1. There is no such thing as "non-routable" addresses or environments. Every packet can be routed. It can just be your policy that you don't route specific traffic. 2. You must have some form of connectivity between the sources and the destination Splunk installation. Depending on the details of the installation it can be a straight over-the-internet connection, a local connection, or a VPN tunnel. But you must have some connectivity. Otherwise how do you want to provide Splunk with the data to index? Send it on floppy disks?
Make sure you are in the lookup editor app context. If you're using Splunk Cloud: https://<domain_name>.splunkcloud.com/en-GB/manager/lookup_editor/data/ui/views 
The "summaryindex" command is just an alias for the "collect" command (I told you you're using that command, didn't I? ) But seriously - yes, summary indexing is a way of producing synthetic events containing some pre-aggregated values so you can later rely on those values instead of calculating the statistics from the raw data. So the idea is that you produce a set of pre-calculated fields which will be stored in the summary index in a predefined format - that's why you use the stash sourcetype, and that's why this sourcetype does not incur any additional license usage.
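A minimal sketch of what this looks like in practice. The index, sourcetype, and field names below (index=web, access_combined, status) are illustrative placeholders, not from the original post; assume the search runs hourly on a schedule:

```
index=web sourcetype=access_combined earliest=-1h@h latest=@h
| stats count AS hourly_events BY status
| collect index=summary source="web_hourly_summary"
```

A later report can then read the pre-aggregated values instead of the raw data, e.g. `index=summary source="web_hourly_summary" | stats sum(hourly_events) BY status`. Because collect writes with the stash sourcetype by default, the summarized events do not count against license usage.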
It was a permission issue for the eventtype.
Good morning all, I have several logs with fields which have a subfield. I would like to be able to extract the subfield and append it to the parent. The example should clarify my query. I have a log of user modifications. The log would look something like this:

Changed Attributes:
  SAM Account Name: -
  Display Name: -
  User Principal Name: -
  Home Directory: -
  Home Drive: -
  Script Path: -
  Profile Path: -
  User Workstations: -
  Password Last Set: 9/12/2023 7:30:15 AM
  Account Expires: -
  Primary Group ID: -
  AllowedToDelegateTo: -
  Old UAC Value: -
  New UAC Value: -
  User Account Control: -
  User Parameters: -
  SID History: -
  Logon Hours: -

I would like to be able to create a table which will have a column including the "parent" field (Changed Attributes) as well as the child field, for example: Changed Attributes: Password Last Set.

Alternatively, I would also settle for a table with a statically assigned column, let's call it "changed data", and as a value have: Password Last Set: 9/12/2023 7:30:15 AM

Another challenge I have (probably a candidate for another question on the forum) is to add the value to a table column only if it has a value other than "-" to the right of it. The reason is that only one changed attribute (of all those in the list above) will have any value. I would like to report on which attribute was changed for a user.

Thank you very much in advance for any direction.

Kind Regards,

Mike.
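Not an answer from the thread, but one possible direction for this kind of extraction, assuming the attribute lines appear in _raw as shown above (the `user` field and the regex are illustrative assumptions):

```
... your base search ...
| rex max_match=0 "(?<changed>[A-Za-z ]+: [^\r\n]+)"
| eval changed=mvfilter(!match(changed, ": -$"))
| table _time user changed
```

The rex pulls every "Attribute: value" pair into a multivalue field, and mvfilter drops the pairs whose value is just "-", leaving only the attribute that actually changed.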
Hi @shriramwasule, if you cannot open a connection to the Internet for any system, the only solution is to have a Splunk infrastructure on premise in your segregated network. If instead you can open the Internet connection for only one system, you could use one Heavy Forwarder (a full Splunk instance that doesn't index data but forwards all data to your Private Cloud Splunk infrastructure) as a concentrator; in this way you can send data to Splunk while limiting the Internet connections. It would be better to use two Heavy Forwarders to balance the load and avoid a single point of failure. Ciao. Giuseppe
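To illustrate the concentrator setup described above, a hedged sketch of outputs.conf on each source system's forwarder, assuming two Heavy Forwarders listening on the default port 9997 (the hostnames are placeholders):

```
# outputs.conf on each Universal Forwarder
# hf1/hf2 are placeholder hostnames for the two Heavy Forwarders
[tcpout]
defaultGroup = hf_concentrators

[tcpout:hf_concentrators]
server = hf1.internal.example:9997, hf2.internal.example:9997
```

Listing both servers in one output group makes the forwarder load-balance between them automatically, which addresses the single-point-of-failure concern.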
Hi, @inventsekar , can you please create a few fields so that I can create the remaining fields? Thanks
Dashboards are a way of visualising data from searches. Alerts are a way of generating actions from scheduled searches. Alerts aren't generated from dashboards.
Hi @BTB , as @PickleRick highlighted, you have the "Once" choice: it's visible in your screenshot, so why aren't you able to select it? If you cannot select it, I have never seen this behaviour! If you really aren't able to select "Once", open a ticket with Splunk Support. Ciao. Giuseppe
My trial has finished and almost fully expired. I don't want to keep my account; could you guide me on how to fully delete the account and all related info? Also, the controller GUI keeps showing a 500 Internal Server Error, which hasn't been resolved so far.
Hi @Lax , good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Hi @RSS_STT , sorry! I was focused on the other fields and I forgot the start of the string. Please try this: \"CI\":\s+\"(?<CI_V2>[^;]*);(?<CI_1>[^;\"]*);(?<CI_2>[^;\"]*);*(?<CI_3>[^;\"]*);*(?<CI_4>[^;\"]*);(?<CI_5>[^\"]*) which you can test at https://regex101.com/r/fndJqR/3 Ciao. Giuseppe
It seems to be working for the rest of the fields but not for CI_V2. It is creating the field value CI_V2="CI": "V2; it should be CI_V2 = V2.
Hi @pm2012  this is a decade old post, but this should give you some ideas..  https://community.splunk.com/t5/Getting-Data-In/How-do-I-tell-if-a-forwarder-is-down/m-p/10407  
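In the same spirit as that old thread, a minimal sketch for flagging forwarders that have gone quiet, using the metadata command against _internal (the 15-minute threshold is an arbitrary example, not a recommendation):

```
| metadata type=hosts index=_internal
| eval minutes_since_last_event=round((now()-recentTime)/60)
| where minutes_since_last_event > 15
| convert ctime(recentTime) AS last_seen
| table host last_seen minutes_since_last_event
```

Every forwarder normally sends its own internal logs to _internal, so a host whose recentTime is stale there is a reasonable candidate for a down or disconnected forwarder.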
Hello, thanks for your assistance. I will accept your solution. Can you also comment below?

By "groups of commands" I meant "groups of searches"; I will use this term moving forward.

When I checked "Enable summary indexing" on a scheduled report, it automatically appended the following statement at the end of the search:

| summaryindex spool=t uselb=t addtime=t index="summary" file="[filename].stash_new" name="test_ip" marker="hostname=\"https://test.com/\",report=\"test_ip\""

When I run index=summary report=test_ip | dedup sourcetype, the sourcetype is stash, while the original sourcetype is syslog.

I read the link you sent; it states that if I change the sourcetype, it will incur license usage:

sourcetype
Syntax: sourcetype=<string>
Description: The name of the source type that you want to specify for the events. By specifying a sourcetype outside of stash, you will incur license usage. This option is not valid when output_format=hec.
Default: stash

The solution you suggested is to split the events so they won't have multivalues before the summary index. Or can I split multivalues after the summary index? Thanks
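On the multivalue question, splitting is usually done before collect, since the summary index simply stores whatever events the scheduled search emits. A hedged sketch, where src_ip stands in for the multivalue field (a placeholder, not taken from the thread's data):

```
... your base search ...
| mvexpand src_ip
| stats count BY src_ip
| collect index=summary source="test_ip_summary"
```

Splitting after reading the summary back is also possible with mvexpand, provided the multivalue field was stored intact in the summarized events.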
Hi All, our scenario is like this: in our AWS environment, we want to collect our logs using the universal forwarder from our Linux, EKS and Windows servers. But the thing here is that we don't have internet in our environment. Can anyone please suggest a solution for how we can install this forwarder and use it to forward our logs to a centralized server for monitoring? Basically it's a non-routable environment, and there are 3 resources from which we want to collect logs:
Linux server
Windows server
EKS cluster
We refer to the golden ticket attack. According to the Kerberos mechanism, a prerequisite for a service ticket request is a user ticket request (or renewal of an existing ticket). When this is not the case and we do not see a corresponding prior login event, the user ticket is suspected to be forged or stolen from another machine. So the logic of the detection is that none of the following corresponding events occurs before the service ticket request (EventCode=4769): 1. User ticket (TGT) request (EventCode=4768). 2. Ticket renewal request (EventCode=4770). 3. Login event (EventCode=4624).
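A hedged sketch of that logic in SPL, assuming Windows Security logs in a wineventlog index with an Account_Name field (both names are assumptions; a production detection would also enforce that the precursor events occur before the 4769 in time, which a plain stats does not):

```
index=wineventlog EventCode IN (4768, 4769, 4770, 4624) earliest=-1h
| stats values(EventCode) AS seen_codes BY Account_Name
| where mvcount(seen_codes)=1 AND seen_codes="4769"
```

The idea is simply: accounts whose only Kerberos-related activity in the window is a service ticket request, with no TGT request, renewal, or logon event, are candidates for a forged ticket.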
Is there any prebuilt search (like a rest command) to find the number of triggered alerts for a particular dashboard? If not, can we create a search which helps in identifying which triggered alert is associated with which dashboard for a specific time period?
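There is no built-in mapping from dashboards to alerts (alerts fire from saved searches, not dashboards), but the fired-alerts REST endpoint is a starting point for the counting half of the question. A sketch, assuming your role has permission to read that endpoint; correlating the saved-search names with a dashboard would have to be done manually against the dashboard's XML:

```
| rest /services/alerts/fired_alerts
| table title triggered_alert_count
```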