All Posts

The rex command defaults to the _raw field. Other fields must be explicitly referenced. The following works in my sandbox:

| makeresults
| eval message="<UL> <LI> The first vulnerability occurs because Internet Explorer does not correctly determine an obr in a pop-up window.</LI> <LI> The t type that is returned from a Web server during XML data binding.</LI> </UL> <P> &quot;Location: URL:ms-its:C:WINDOWSHelpiexplore.::/itsrt.htm&quot; <P> :<P><A HREF='http://blogs.msdn.com/embres/archive/20/81.aspx' TARGET='_blank'>October Security Updates are (finally) available!</A><BR>"
| rex field=message mode=sed "s/\<[^\>]+>//g"
Would anyone know how to do it?
@richgalloway This expression is not removing the tags from the raw data:

| makeresults
| eval message="<UL> <LI> The first vulnerability occurs because Internet Explorer does not correctly determine an obr in a pop-up window.</LI> <LI> The t type that is returned from a Web server during XML data binding.</LI> </UL> <P> &quot;Location: URL:ms-its:C:WINDOWSHelpiexplore.::/itsrt.htm&quot; <P> :<P><A HREF='http://blogs.msdn.com/embres/archive/20/81.aspx' TARGET='_blank'>October Security Updates are (finally) available!</A><BR>"
| rex mode=sed "s/\<[^\>]+>//g"
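For what it's worth, this search omits field=message, so rex mode=sed operates on _raw (which makeresults leaves empty) and the message field is never modified — exactly the default behaviour described in the answer above. A minimal sketch of the fix is just to name the field explicitly on the last line:

| rex field=message mode=sed "s/\<[^\>]+>//g"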
Hi Team, I am trying to deploy the Splunk UBA node, but I got a bit confused because, in the Splunk UBA operating system requirements, I couldn't find whether Red Hat 8.10 or 9.2 is supported. I only found the information below. How can I determine whether Red Hat 8.10 or 9.2 is supported?

Operating System: Red Hat Enterprise Linux (RHEL) 8.8
Kernel Versions Tested: 4.18.0-477.10.1.el8_8.x86_64, 4.18.0-372.9.1.el8.x86_64
It's not working as I expect. I already know how to build the description. To simplify: I'm creating a search that reports whether a host is up or down. If there are no failed alerts, it is up. I am creating an event for up or down. If hosts are down, I need to add the list of down hosts to the description. I can't share my actual search, but this was enough to give a better understanding.

index=myindex message=" failed*"
| table host
| dedup host
| append [| makeresults annotate=true | eval host="Dummy" | table host]
| eventstats count
| eval status = if(count<2,"UP","DOWN")
| eval severity = if(status="DOWN","Critical","Normal")
| eval multiplehost=mvjoin(host, ", ")
| eval msg=if(severity="Critical","Host Have Failed", "Host are Successful")
| eval description=if(severity="Critical",multiplehost,"").msg

I have tried different commands to join the hosts and placed them in various places, but I can't seem to get them combined into (host1, host2, host3) in the description.
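One likely catch: mvjoin only joins values inside a single multivalue field, and after dedup each result row carries a single host value, so mvjoin(host, ", ") has nothing to join. A sketch of one approach, first collapsing all hosts into one multivalue field with eventstats values() — the field names all_hosts and hostlist are illustrative, and note the "Dummy" row from the append will also appear in the values, so it may need filtering out:

| eventstats values(host) as all_hosts, count
| eval status = if(count<2,"UP","DOWN")
| eval severity = if(status="DOWN","Critical","Normal")
| eval hostlist = mvjoin(all_hosts, ", ")
| eval msg = if(severity="Critical","Host Have Failed","Host are Successful")
| eval description = if(severity="Critical","(".hostlist.") ".msg, msg)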
I need to capture everything except the HTML tags like </a> <a> </p> </b>. These tags may appear anywhere in the raw data. I was able to come up with a regex that matches the tags as a non-capturing group, (?:<\/?\w>), but I am stuck: I'm not able to capture everything else in the raw data.

Sample:

Explorer is a web-browser developed by Microsoft which is included in Microsoft Windows Operating Systems.<P> Microsoft has released Cumulative Security Updates for Internet Explorer which addresses various vulnerabilities found in Internet Explorer 8 (IE 8), Internet Explorer 9 (IE 9), Internet Explorer 10 (IE 10) and Internet Explorer 11 (IE 11). <P> KB Articles associated with the Update:<P> 1) 4908777<BR> 2) 879586<BR> 3) 9088783<BR> 4) 789792<BR> 5) 0973782<BR> 6) 098781<BR> 7) 1234788<BR> 8) 8907799<BR><BR> Please Note - CVE-2020-9090 required extra steps to be manually applied for being fully patched. Please refer to the FAQ seciton for <A HREF='https://portal.mtyb.windows.com/en-PK/WINDOWS-guidance/advisory/CVE-2020-9090 ' TARGET='_blank'>CVE-2020-9090 .</A><P> QID Detection Logic (Authenticated):<BR> Additionally the QID checks if the required Registry Keys are enabled to fully patch <A HREF='https://portal.msrc.windows.com/en-US/guidance/advisory/CVE-2014-82789' TARGET='_blank'>CVE-2014-2897.</A> (See FAQ Section) <BR> The keys to be patched are: <BR> &quot;whkl\SOFTWARE\Microsoft\Internet Explorer\Main\FEATURE_ENABLE_PASTE_INFO_DISCLOSURE_FIX&quot; value &quot;iexplore.exe&quot; set to &quot;1&quot;.<BR>
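Two observations, for what they're worth: (?:<\/?\w>) only matches tags whose name is a single character (like <P> or </b>), so <BR> or <A HREF='...'> never match — <[^>]+> covers those; and rather than capturing "everything except the tags", it's usually easier to delete the tags, either with the sed-mode rex shown in the answer above or with the replace() eval function. A minimal sketch, assuming the text is in _raw and the field name clean is illustrative:

| eval clean=replace(_raw, "<[^>]+>", "")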
Hi @Siddharthnegi , as I said, if you install the above app in your system, there's an example of how to implement the "Null Search Swapper", which is exactly the feature you need. In the example there's the code to use in the dashboard; you only need to customize it for your searches and panels. Ciao. Giuseppe
Hello, Can someone please help me extract nested JSON fields without regex? I have tried the following:
1. Updating KV_MODE = json in the search head TA props.conf
2. Updating INDEXED_EXTRACTIONS = json in the search head TA props.conf
3. Updating limits.conf with the spath stanza for the HF TA:
[spath]
extraction_cutoff = 10000
4. The mvexpand command.
Nothing worked. My raw logs look like this:
event": "{\"eventVersion\" "1.08\",\"userIdentity\":{\"type\" "AssumedRole\",\"principalId\" "AROAXYKJUXCU7M4FXD7ZZ:redlock\",\"arn\" "arn:aws:sts::533267265705:assumed-role/PrismaCloudRole-804603675133320192/redlock\",\"accountId\" "533267265705\",\"accessKeyId\" "ASIAXYKJUXCUSTP25SUE\",\"sessionContext\":{\"sessionIssuer\":{\"type\" "Role\",\"principalId\" "AROAXYKJUXCU7M4FXD7ZZ\",\"arn\" "arn:aws:iam::533267265705:role/PrismaCloudRole-804603675133320192\",\"accountId\" "533267265705\",\"userName\" "PrismaCloudRole-804603675133320192\"},\"webIdFederationData\":{},\"attributes\":{\"creationDate\" "2024-05-03T00:53:45Z\",\"mfaAuthenticated\" "false\"}}},\"eventTime\" "2024-05-03T04:09:07Z\",\"eventSource\" "autoscaling.amazonaws.com\",\"eventName\" "DescribeScalingPolicies\",\"awsRegion\" "us-west-2\",\"sourceIPAddress\" "13.52.105.217\",\"userAgent\" "Vert.x-WebClient/4.4.6\",\"requestParameters\":{\"maxResults\":10,\"serviceNamespace\" "cassandra\"},\"responseElements\":null,\"additionalEventData\":{\"service\" "application-autoscaling\"},\"requestID\" "ef12925d-0e9a-4913-8da5-1022cfd15964\",\"eventID\" "a1799eeb-1323-46b6-a964-efd9b2c30a8a\",\"readOnly\":true,\"eventType\" "AwsApiCall\",\"managementEvent\":true,\"recipientAccountId\" "533267265705\",\"eventCategory\" "Management\",\"tlsDetails\":{\"tlsVersion\" "TLSv1.3\",\"cipherSuite\" "TLS_AES_128_GCM_SHA256\",\"clientProvidedHostHeader\" "application-autoscaling.us-west-2.amazonaws.com\"}}"}
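Not a definitive fix, but one pattern that often works for JSON embedded as an escaped string like this, with no regex: use spath once to pull out the event field (which resolves the embedded escape characters), then run spath again over the result. The field name payload is illustrative:

| spath path=event output=payload
| spath input=payload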
Hi Support Team, I have two Splunk indexers and two forwarders. Both forwarders have index = test in inputs.conf, but there is configuration on the indexers to decide which index to put the data in based on the data itself (one of the values in the JSON object).

Forwarder 1 has been running for a while with no problems (it runs version 6.4.1). Forwarder 2 is new (version 9.2.1) and requires exactly the same configuration as Forwarder 1, which I have already applied. The only difference is the host (host1 and host2). The data from Forwarder 2 is being sent to the indexers, but the index is not changed based on the config on the indexers; the data goes to the test index as specified in the forwarder config. Both indexers are running 7.3.3.

What could I be missing to get the indexers to put the data from Forwarder 2 in the correct index? Could this not be working due to the different Splunk versions? Thanks
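For context, index rerouting on an indexer is normally an index-time transform along these lines (the stanza names, sourcetype, regex, and target index below are all illustrative). One thing worth checking is whether Forwarder 2 is a heavy forwarder: data parsed on a heavy forwarder is not re-parsed on the indexers, so index-time transforms there would be skipped.

# props.conf on the indexers
[my_json_sourcetype]
TRANSFORMS-route_index = route_to_audit

# transforms.conf on the indexers
[route_to_audit]
REGEX = "type"\s*:\s*"audit"
DEST_KEY = _MetaData:Index
FORMAT = audit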
It could be several things post-restart. Get some more info from the _internal logs - this may help further investigate and identify the issue:

index=_internal sourcetype=splunkd splunk_app_db_connect
index="_internal" sourcetype=splunkd "db_connect" log_level=ERROR

Check the KV store status:

| rest splunk_server=local count=1 /services/server/info | table kvStoreStatus

OR

$SPLUNK_HOME/bin/splunk show kvstore-status --verbose

Check the DB Connect app permissions:

chown -R splunk:splunk $SPLUNK_HOME/etc/apps/splunk_app_db_connect

Sometimes it won't start due to the default certificates, as they may have expired. If using the Splunk default certificates, move or rename the $SPLUNK_HOME/etc/auth/server.pem file (e.g. to server.pem.old) and restart Splunk to regenerate the certificate.

Check the Java version:

java -version

Make sure it's compatible: https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/Prerequisites

See if there is some additional help via the troubleshooting page: https://docs.splunk.com/Documentation/DBX/3.16.0/DeployDBX/TroubleshootingTool

If all that fails to resolve the issue, log a support case.
I cannot find any option for a recurring Maintenance Window in ITSI. E.g. stop alerting daily from 23:00 to 00:00 (1 hour). Does ITSI have something like cron-based suppression? Do not tell me to use the REST API again.
@airforce Hi, DB Connect is what you need for integration with Snowflake logging, so go with that:
https://docs.splunk.com/Documentation/DBX
https://splunkbase.splunk.com/app/2686
The Snowflake app is for Splunk SOAR (Security Orchestration, Automation and Response), which is for security process automation; from your question it appears you don't need that.
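Once a DB Connect connection to Snowflake is set up, data can be queried ad hoc with the dbxquery command. A minimal sketch - the connection name, table, and columns below are purely illustrative:

| dbxquery connection="snowflake_logging" query="SELECT event_timestamp, user_name, error_code FROM my_db.my_schema.login_history"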
Can you give me an idea of how to do it?
Just collecting the logs is a great start. If you want to collect technical metrics about user interaction, you can use the RUM integration as well. And depending on what your backend looks like, you could use the open-source OpenTelemetry libraries to instrument the backend application that processes your web application data. There is even a free, open-source Splunk distribution of OpenTelemetry (including the collector) available.
That is not relevant to my case since I have no orphaned knowledge base items. I just checked, following the instructions from the knowledge base, and even with all filters set to 'All' the list turns up empty.
OK. What _exactly_ did you try? (Just saying "tried configuring" doesn't tell us anything - it's like saying "I tried to go to my mum's" without even specifying whether you wanted to take a bus, ride a bike, or drive a car.) And what was the result? Did you get any errors or other messages? How did you verify that something "is not sent"?
OK. Let's start from the beginning.

1. Monitoring files this way requires your forwarder to run with root permissions in order to be able to read all those files. That might be a problem for your security team and is generally not the best idea (although sometimes it indeed can't be avoided).

2. Monitoring the .bash_history files is not a very good idea for monitoring user activity. You can easily manipulate the bash history, turn it off completely, or bypass it. There are other ways to monitor user activity (some of them more convenient, some not, I admit). If you want to limit yourself to just bash and have a log of bash history entries, you can set the syslog_history option for bash and have it log to the local syslog daemon - still not a great, fail-safe solution, but way better than reading each user's separate file.

3. If you want to stick with reading the .bash_history files, you should make sure your events are timestamped - if the environment variable HISTTIMEFORMAT is set, bash uses its contents to format the timestamp it includes in the history file. This way you can have your entries timestamped. You should make this variable persistent across your whole environment by setting it in your /etc/profile.d/ - see the sketch after this list. Without it the behaviour will be as you're describing: the events are not timestamped, so Splunk has no way of telling when they are from.

4. I hope you don't have too many users on your box, because you might run out of file descriptors if you open too many files.

5. Oh, and BTW, 7.x has been obsolete for some years now, so it would be time to consider an upgrade.
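A minimal sketch of point 3, assuming a file dropped into /etc/profile.d/ is acceptable in your environment (the file name is illustrative):

# /etc/profile.d/histtimestamp.sh
export HISTTIMEFORMAT="%F %T "

When HISTTIMEFORMAT is set, bash also writes an epoch-seconds comment line (e.g. #1714700000) before each entry in .bash_history, which Splunk can parse with props.conf settings along these lines (the sourcetype name is illustrative):

[bash_history]
TIME_PREFIX = ^#
TIME_FORMAT = %s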
Hi @shakti, use faster disks if you're using a physical server, or dedicated resources if you're using a virtual server - and, if possible, SSD disks. Ciao. Giuseppe
@gcusello Thank you for your reply. The IOPS of the indexers and search heads is between 50-300... I guess that's pretty low. May I know whether you have any suggestions on how to improve it?
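For reference, a quick way to see what the disks are actually doing is the standard sysstat tool - nothing Splunk-specific, and the 5 is just a sample interval in seconds:

iostat -x 5
# r/s + w/s approximate the IOPS actually being served;
# high await and %util near 100 point at saturated disks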
Hi @Siddharthnegi , please try this https://splunkbase.splunk.com/app/1603 Ciao. Giuseppe