All Posts

The First Law of asking an answerable question states: present your dataset (anonymized as needed), illustrate the desired output from that dataset, and explain the logic that connects the illustrated dataset to the desired output, without SPL. If your attempted SPL does not give the desired output, also illustrate the actual output (anonymized as needed), then explain how it differs from the desired results if that is not painfully clear.

"I am able to pull my AD users' account information successfully except for their email addresses." Can you explain which source you are pulling AD info from? Your SPL only uses a lookup file. Do you mean that the lookup table AD_Obj_User contains email addresses but the illustrated SPL does not output them, or that your effort to populate AD_Obj_User fails to obtain email addresses from a legitimate AD source (as @deepakc speculated)? If the former, what is the purpose of the SPL? What is the content of AD_Obj_User? What is the desired output, and what is the logic between that content and the desired output? If the latter, what is the purpose of showing the SPL?
It’s mainly around performance, time to value, and using all the ES features. You could be a large enterprise ingesting loads of data sources, with a big SOC operation, and you might want to run many, many different correlation rules. That would not be practical on raw data, and it would take a long time to develop new rules when so many come out of the box. So this is where data models come into play: faster and better all round.

For you, it sounds like you have just a few use cases and can run your own rules on raw data, and if you're happy with that, then that’s fine. But you're not then exploiting what ES has to offer and all the use cases built around data models.
Hi @deepakc  - Good Morning. Thank you, this is really helpful. You have a great day! Best Regards, Dilip
Hi @gcusello  - Good Morning. Thank you for the wonderful help and guidance. I am now able to proceed with this. I highly appreciate your help. You have a great day! Best Regards, Dilip K Mondal
You might be able to change the layout type in the code. However, the coordinates and other configuration parameters need to be adjusted based on the dimensions. Make a clone/copy of the dashboard to experiment on so that the original dashboard is not affected.

"layout": {
  "type": "grid",
  "options": { "width": 1440, "height": 960 }
}

Having said that, the absolute layout type gives you a lot of flexibility: https://docs.splunk.com/Documentation/Splunk/9.2.1/DashStudio/Layouts

In your case, why can't you add all the tables to the canvas? The size of the canvas can be changed, as you might have already explored. If you could share the dashboard code, we can probably get a better idea of the actual situation.
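For comparison, here is a minimal sketch of what an absolute layout stanza could look like in a Dashboard Studio definition; the visualization ID "viz_table_1" and the coordinates are hypothetical placeholders you would replace with your own items:

"layout": {
  "type": "absolute",
  "options": { "width": 1440, "height": 960 },
  "structure": [
    {
      "item": "viz_table_1",
      "type": "block",
      "position": { "x": 20, "y": 20, "w": 700, "h": 400 }
    }
  ]
}

With absolute layout each block carries its own x/y/w/h, which is why the coordinates need revisiting after switching the type.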
It could be a permissions issue: the account needs to be able to read the email address attribute ((&(objectClass=user)(objectCategory=person)(mail=*))). Check the permissions of the user account that is being used to pull the LDAP data; see your AD admin. Or run something like the below under that user account to check.

dsquery user -samid username | dsget user -email

If that's not it, find out how the lookup is being populated; normally it is done via the ldapsearch command (see the references below). Check the ldap search that creates the lookup and you should have the data there; it may already have been created as a scheduled search.

Reference:
LDAP search using the command: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.8/User/Theldapsearchcommand
LDAP add-on: https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.8/User/AbouttheSplunkSupportingAdd-onforActiveDirectory
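As a rough sketch of what such a scheduled lookup-populating search might look like (assuming the SA-LdapSearch add-on is installed, the LDAP connection is named "default", and AD_Obj_User is the lookup from this thread):

| ldapsearch domain=default search="(&(objectClass=user)(objectCategory=person))" attrs="sAMAccountName,displayName,mail"
| table sAMAccountName displayName mail
| outputlookup AD_Obj_User

If mail comes back empty here, the problem is on the LDAP/permissions side rather than in the SPL that reads the lookup.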
Check your CSV file; it might be to do with the CSV formatting. Create a simple test CSV file with a few headers and a few rows of data and see if that goes through. If it does, you can then check your original CSV and ensure it's correctly formatted.
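For example, a minimal test file (the column names here are hypothetical) could look like this:

user,department,location
alice,finance,london
bob,engineering,paris

If this three-line file imports cleanly, the issue is likely something in the original file, such as stray quotes, an inconsistent delimiter, or a header/row column mismatch.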
Hello @deepakc , thanks for your post. As I mentioned in my post, I already knew about data model acceleration and the ability to run searches across multiple sources. Undoubtedly, these are the main advantages of using data models. However, regarding the usage of data models in Splunk ES, I have a custom correlation search that runs without data models and works perfectly fine, which leaves the question about the need for data models in ES correlation searches still open.
Normal searches run on the raw data, while data models are a populated dataset built from target fields of the raw data, hence data models are faster.

Data models normalize and standardise data across multiple indexes, allowing users to analyse data consistently regardless of the source. They include accelerations and summaries; these accelerations speed up searches and make analysis faster and more efficient, especially for large datasets. Overall, using data models in Splunk enhances data analysis capabilities, improves performance, and simplifies the process of exploring and understanding data.

ES is based on data models, so you index your security data sources (Firewall/IDS, Network, Auth) in the normal way, ensure you install the CIM (Common Information Model) compliant TAs for those data sources, and then tune the data models to search your target indexes so they populate based on the types of data sources. Once everything is in and configured, ES lights up, and you can deploy various correlation rules, which mostly run on the data models. (Simple explanation.)

Example: you want the Network Traffic data model to be available for ES. You ingest Cisco ASA data into your normal index, install the Cisco ASA TA from Splunkbase, then tune CIM for this data so it searches it and populates the Network_Traffic data model on a regular basis. From there you can run various out-of-the-box rules or create your own, using tstats etc.

See the below for the various data models and their normalised fields:
https://docs.splunk.com/Documentation/CIM/5.0.2/User/Howtousethesereferencetables
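To give a feel for what a rule against an accelerated data model looks like, here is a minimal tstats sketch against the CIM Network_Traffic data model; summariesonly=true assumes acceleration is enabled, the action/src/dest fields are standard CIM fields, and the threshold of 100 is made up:

| tstats summariesonly=true count from datamodel=Network_Traffic where Network_Traffic.action="blocked" by Network_Traffic.src Network_Traffic.dest
| rename Network_Traffic.src AS src Network_Traffic.dest AS dest
| where count > 100

The same logic on raw firewall events would have to scan every event in the index, which is why this style scales much better for a busy SOC.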
Hello splunkers! I have a simple question regarding Splunk data models and regular searches. I have found some answers, but I would like to dig deeper.

What's the advantage of using data models? Why would we want to use data models instead of regular searches where we just specify the indexes in which we want to search for the data?

I know so far that data models allow searching through multiple sources (network devices and workstations) by having standardized fields. I also know about data model acceleration, and that we can use tstats in searches on accelerated data models in order to speed up the searches.

Is there a particular scenario where we must use data models and not using them will not work? (I am using Enterprise Security as well, so if there is any scenario that involves this app, it is most welcome.)

I would really appreciate a well-detailed answer. Thank you for taking the time to read and reply to my post.
Anecdotal, but I found that a few other log-shipping vendors appeared to have similar issues with the Forwarded Events log and Server 2022: agent crashing/restarting constantly, but they seem to have patched their problems already. Fix Windows eventchannel forwarded events by nbertoldo · Pull Request #20594 · wazuh/wazuh (github.com) [Winlogbeat] Repeated warnings · Issue #36020 · elastic/beats (github.com) Interesting at least.
I don't really have a solution.  I was going to suggest multiple white lists, but you said that didn't work for you. Also, you want to filter on AccountName and ObjectName, but those fields are not supported by whitelist/blacklist.  See https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Inputsconf#Event_Log_allow_list_and_deny_list_formats for the list of supported fields. Consider ingesting the Windows events in XML format and filtering them using the $XmlRegex key.  See https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/MonitorWindowseventlogdata#Use_allow_lists_and_deny_lists_to_filter_on_XML-based_events for more information.
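As a rough sketch of that approach (the event code, the ObjectName value, and the exact regex are hypothetical; confirm the precise $XmlRegex syntax against the linked docs for your version), an inputs.conf stanza might look something like:

[WinEventLog://Security]
renderXml = true
blacklist1 = $XmlRegex%(?ms)<EventID>4663</EventID>.*<Data Name='ObjectName'>[^<]*Temp%

Rendering the events as XML is what makes fields such as AccountName and ObjectName visible to the regex in the first place.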
Splunk Apps --> Splunk App for Lookup Editing --> select import file. While uploading the file, it does not upload and there is no error message; the screen stays on the same import pop-up options. Please guide me on how to fix this issue.
Example using the makeresults command for the JSON data:

| makeresults
| eval json_data="{\"pyOptions\":{\"HasTelephonyPriv\":\"true\",\"isSnapshotOnly\":\"\",\"pyAutoLogin\":\"\",\"pyClientHandle\":\"HEWR40W8VLO39ZP5OVIBJKMZKEF8YETH5A\",\"pyDeviceState\":\"\",\"pyNumberOfLines\":\"3\",\"pyPegaCTIError\":\"\",\"pyTelephonyMode\":\"1\",\"pyThisPageAsJSON\":\"\",\"pyUserIdentifier\":\"user1234\",\"pyUserName\":\"\",\"pyUserPassword\":\"\",\"pyWorkMode\":\"Busy\",\"queue\":[\"\"]},\"pyPageExists\":\"false\",\"pyPort\":\"7017\",\"pyPresenceAgent\":\"H-GET\",\"pySelectedLinkName\":\"CHANNELSERVICES-ADMIN-CTILINK-LOCAL-JTAPI AVAYAPBX1\",\"pySSLProtocolVersion\":\"TLSv1.2\",\"pyStatusMessage\":\"Couldn't connect to server\",\"pyStatusValue\":\"Fail\",\"pySwitchType\":\"Avaya EAS CM\",\"pyVendor\":\"Avaya\",\"pyWorkgroupPhoneBook\":\"true\",\"pzInsKey\":\"CHANNELSERVICES-ADMIN-CTILINK-LOCAL-JTAPI AVAYAPBX1\",\"pzLoadTime\":\"May 3, 2024 9:00:35 AM CDT\",\"pzOriginalInstanceKey\":\"CHANNELSERVICES-ADMIN-CTILINK-LOCAL-JTAPI AVAYA-1\",\"pzPageNameBase\":\"D_CTILinkInfo\",\"LogoutReasonCodes\":[],\"NotReadyReasonCodes\":[],\"pyThisDN\":\"24181\",\"pyWorkMode\":\"Busy\"}"
| eval pyUserIdentifier=spath(json_data,"pyOptions.pyUserIdentifier")
| eval pyStatusMessage=spath(json_data,"pyStatusMessage")
| stats count BY pyUserIdentifier,pyStatusMessage

If you use the spath command, the data must be well formatted as per the standards: https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Spath

If you are using INDEXED_EXTRACTIONS=JSON or KV_MODE=JSON in the props.conf file, then you don't need to use the spath command, as the fields/values are automatically extracted for you and you can then use the stats command on your fields; this is the preferred option. If you don't know what this is, speak to your Splunk admin to onboard the JSON data correctly.
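If you go the props.conf route instead, a minimal sketch of the search-time option would be the following, where the sourcetype name my_pega_json is a placeholder for whatever your events actually use:

props.conf (on the search head):
[my_pega_json]
KV_MODE = json

With that in place, fields such as pyOptions.pyUserIdentifier and pyStatusMessage should appear automatically at search time and can be fed straight into stats.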
Hi @jason_hotchkiss, instead of using static values, you could use a search like the following:

| makeresults
| eval my_field="MyValue1"
| append [ | makeresults | eval my_field="MyValue2" ]
| append [ | makeresults | eval my_field="MyValue3" ]
| sort my_field
| table my_field

In this way, you can use the field "my_field" as the values in the token.

Ciao,
Giuseppe
Hi @DilipKMondal ,
please try something like this:

<your_search>
| spath
| rename pyOptions.pyUserIdentifier AS pyUserIdentifier
| stats count AS "Count of occurrences" BY pyUserIdentifier pyStatusMessage
| eval counter=1
| accum counter as "#"
| table "#" pyUserIdentifier pyStatusMessage "Count of occurrences"

Ciao.
Giuseppe
I'm starting to think that if the Windows host has the NetBIOS name, then that is what you end up with in the Deployment Server Host column, unless during install it was given the FQDN (+DNS). I know your searches are coming up with the FQDN, so I'm stumped as to why the Hosts column is not showing it!

This setting will change the Client Name shown in the Deployment Server from the GUID to the FQDN, and allow filtering on FQDN:

deploymentclient.conf
[deployment-client]
clientName = FQDN

This setting will change the instance name in the Deployment Server:

server.conf
serverName = FQDN
Normally we can pass parameters to a saved search in the args.* form, but how do we pass a parameter that does not start with args., such as $host$? In SPL, savedsearch can pass the parameter correctly, but if I invoke the saved search dispatch action via the REST API, a parameter not starting with args. is not accepted and it returns an error.

Sample saved search query with host as one of the parameters that I want to substitute at runtime:

index=fooindex sourcetype=foosourcetype host=$args.host$

Sample JS code to dispatch with argument substitution:

mySavedSearch.dispatch({"args.host": "foohost"}, function(err, job) {
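For reference, a raw REST call using the args.* form that does work looks roughly like the following; the host, credentials, and the owner/app context (nobody/search) are placeholders for your environment:

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches/mySavedSearch/dispatch -d args.host=foohost

Parameters posted as args.<name> are substituted into the corresponding $args.<name>$ tokens at dispatch time, which is consistent with the behaviour described above for tokens like $host$ that lack the args. prefix.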
Hello, @gcusello , thanks for the additional information. I tested this case in my lab environment and it worked! I just want to clarify some small details. I added maxQueueSize to the outputs.conf under /SplunkUniversalForwarder/etc/apps/SplunkUniversalForwarder/local, because I had already configured that file in that path to send logs to Splunk. However, I also found this article, Howto configure SPLUNK Universal Forwarder (kura2gurun.blogspot.com), which says that we should configure the outputs.conf file located at /opt/splunkforwarder/etc/system/local/. Is there any impact or difference in having configured outputs.conf in the path I mentioned above instead of that specific path?

Cheers,
SplunkyDiamond
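One way to check which copy of outputs.conf actually wins on the forwarder (a sketch; adjust the path to your install) is:

/opt/splunkforwarder/bin/splunk btool outputs list --debug

The --debug flag prints the file each effective setting comes from, so you can confirm whether the app-level or the system-level outputs.conf is supplying maxQueueSize.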
Now you see the importance of illustrating data accurately.  My previous answer could only give you Channel because the only data snippet I could see had Channel.  Now, you can see that accountNumber is a subnode of REQUEST.body.customer, serialNumber is a subnode of REQUEST.body.equipment, while redemptionEquipmentMemory and transactionReferenceNumber are subnodes of RESPONSE.body.model.  Your initial data snippet already established that Channel is a subnode of REQUEST.headers.

All this is to say that to write the correct SPL, you need to understand the data.  Before trying to render results, use SPL to help analyze the data. Now that you know where in the JSON structure each of those fields lies, you could just extract each node.  But doing so is usually too laborious and not good for maintenance and enhancement.  So, I will give a more flexible piece of code:

index="wireless_retail" source="CREATE_FREEDOM.transactionlog" OPERATION="/FPC/Redemption/Redeem"
| rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
| spath input=REQUEST path=headers
| spath input=REQUEST path=body output=REQUEST
| spath input=RESPONSE path=body output=RESPONSE
| foreach headers REQUEST RESPONSE [spath input=<<FIELD>>]
```| spath input=RESPONSE path=headers.set-cookie{} | mvexpand headers.set-cookie{}```
| foreach customer equipment model [rename <<FIELD>>.* AS *]
| table accountNumber serialNumber Channel redemptionEquipmentMemory transactionReferenceNumber

This is an emulation of your sample data:

| makeresults
| eval _raw = "2024-05-02 23:40:22.000, ID=\"5e2276d3-7f02-7984-ad4b-e11507580872\", ACCOUNTID=\"5\", ACCOUNTNAME=\"prd\", APPLICATIONID=\"6\", APPLICATIONNAME=\"ws\", REQUEST=\"{\"body\":{\"customer\":{\"accountNumber\":\"DBC00089571590\",\"lineNumber\":\"8604338\"},\"equipment\":{\"serialNumber\":\"359938615394762\",\"grade\":\"A\"},\"redemptionDetails\":{\"redemptionDate\":\"20240502\",\"user\":\"WVMSKaul\",\"storeNumber\":\"WD227907\",\"dealerNumber\":\"2279\"}},\"headers\":{\"content-type\":\"application/json;charset=UTF-8\",\"Accept\":\"application/json;charset=UTF-8\",\"Channel\":\"6\",\"Locale\":\"en-US\",\"TransactionID\":\"65E5519B-F170-4367-AA03-54A33BA29B4E\",\"ApplicationID\":\"00000411\",\"Authorization\":\"Basic ZnJlZWRvbWNyZWF0ZTpDd0t4dGlmbGZ3ZnFaQVYydWhtUg==\"}}\", RESPONSE=\"{\"body\":{\"model\":{\"isRedeemed\":true,\"transactionReferenceNumber\":\"6200753992\",\"redeemType\":\"Original\",\"redemptionFailureReasonType\":null,\"redemptionEquipmentMake\":\"Samsung\",\"redemptionEquipmentModel\":\"Galaxy S21 FE 128GB Graphite\",\"redemptionEquipmentMemory\":\"128 GB\",\"committedPrice\":1,\"additionalFees\":0},\"code\":200,\"messages\":null,\"isSuccess\":true},\"headers\":{\"connection\":\"close\",\"content-type\":\"application/json;charset=utf-8\",\"set-cookie\":[\"AWSELB=B3A9CDE108B7A1C9F0AFA19D2F1D801BC5EA2DB758E049CA400C049FE7C310DF0BB906899FF431BCEF2EF75D94E40E95B107D7A5B122F6844BA88CEC0D864FC12E75279814;PATH=/\",\"AWSELBCORS=B3A9CDE108B7A1C9F0AFA19D2F1D801BC5EA2DB758E049CA400C049FE7C310DF0BB906899FF431BCEF2EF75D94E40E95B107D7A5B122F6844BA88CEC0D864FC12E75279814;PATH=/;SECURE;SAMESITE=None\",\"visid_incap_968152=gpkNFRF6QtKeSmDdY/9FWWUkNGYAAAAAQUIPAAAAAABmisXXPd3Y2+ulqGUibHZU; expires=Fri, 02 May 2025 07:12:03 GMT; HttpOnly; path=/; Domain=.likewize.com\",\"nlbi_968152=FnwQGi3rMWk+u+PCILjsZwAAAACniSzzxzSlwTCqfbP87/10; path=/; Domain=.likewize.com\",\"incap_ses_677_968152=2ZElDA77lnjppwgU8y9lCWUkNGYAAAAArXuktDctGDMtVtCwqfe5bw==; path=/; Domain=.likewize.com\"],\"content-length\":\"349\",\"server\":\"Jetty(9.4.45.v20220203)\"}}\", RETRYNO=\"0\", ENDPOINT=\"https://apptium.freedommobile.ca/Activation.TradeUp\", OPERATION=\"/FPC/Redemption/Redeem\", METHOD=\"POST\", CONNECTORID=\"0748a993-4566-48ae-9885-2a4dce9de585\", CONNECTORNAME=\"Likewize\", CONNECTORTYPE=\"Application\", CONNECTORSUBTYPE=\"REST\", STARTTIME=\"1714693218282\", ENDTIME=\"1714693222213\", RESPONSETIME=\"3931\", SUCCESS=\"1\", CLIENT=\"eportal-services\", CREATEDDATE=\"2024-05-02 23:40:22\", USERNAME=\"WVMSKaul@wmbd.local\", SESSIONID=\"_027c735b-30ed-472c-99e8-6d0748e5a7d9\", ACTIONID=\"5c0a6f88-5a1e-4fdc-a454-01c53fdc0b9b\", TRACKID=\"674e1eed-ba9e-429f-87fc-3b4773b7dd06\""
``` the above emulates index="wireless_retail" source="CREATE_FREEDOM.transactionlog" OPERATION="/FPC/Redemption/Redeem" ```

The output from the emulated data is:

accountNumber    serialNumber       Channel  redemptionEquipmentMemory  transactionReferenceNumber
DBC00089571590   359938615394762    6        128 GB                     6200753992

Finally, I want to illustrate the most inflexible implementation: custom extraction of the needed fields directly.

index="wireless_retail" source="CREATE_FREEDOM.transactionlog" OPERATION="/FPC/Redemption/Redeem"
| rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
| spath input=REQUEST path=headers.Channel output=Channel
| spath input=REQUEST path=body.customer.accountNumber output=accountNumber
| spath input=REQUEST path=body.equipment.serialNumber output=serialNumber
| spath input=RESPONSE path=body.model.redemptionEquipmentMemory output=redemptionEquipmentMemory
| spath input=RESPONSE path=body.model.transactionReferenceNumber output=transactionReferenceNumber
| table accountNumber serialNumber Channel redemptionEquipmentMemory transactionReferenceNumber

Since 8.1, you can also implement these one-to-one extractions using json_extract.

index="wireless_retail" source="CREATE_FREEDOM.transactionlog" OPERATION="/FPC/Redemption/Redeem"
| rex "REQUEST=\"(?<REQUEST>.+)\", RESPONSE=\"(?<RESPONSE>.+)\", RETRYNO"
| eval Channel = json_extract(REQUEST, "headers.Channel")
| eval accountNumber = json_extract(REQUEST, "body.customer.accountNumber")
| eval serialNumber = json_extract(REQUEST, "body.equipment.serialNumber")
| eval redemptionEquipmentMemory = json_extract(RESPONSE, "body.model.redemptionEquipmentMemory")
| eval transactionReferenceNumber = json_extract(RESPONSE, "body.model.transactionReferenceNumber")
| table accountNumber serialNumber Channel redemptionEquipmentMemory transactionReferenceNumber