
All Posts

Hi @splunkisha , see this https://www.splunk.com/en_us/about-us/splunk-pledge/nonprofit-license-application.html and ask Splunk for a detailed offer. Ciao. Giuseppe
Hello Yuanliu,

I am extremely sorry for the delayed response, and thank you so much for your answer. I had a medical emergency; apologies for the delay. I went through your answer and have noted the following based on my understanding. If anything is incorrect, please advise me.

Please find the following references for the questions you asked:
1. The lookup table '8112_domain_whitelist.csv' contains one column with the domains that need to be whitelisted.
2. sourcetype="ironport:summary". Some of the fields/values we get in this sourcetype are: host, source, UBA Email, Ironport:Summary, generator, sourcetype, action, direction, eventtype, file_name, info_max_time, info_min_time, info_search_time, internal_message_id, message_size_mb, recipient, sender, src_user, src_user_domain, Time
3. sourcetype="MSExchange:2013:MessageTracking" gives success or failure, meaning whether an email was received by the end user (recipient).
4. How frequently are they updated respectively? --> I don't know the answer to this question, I am sorry. Is there a way I could find out? I will also ask the SIEM engineers if you advise.
5. Is one extremely large compared with the other? --> In terms of number of fields, sourcetype="MSExchange:2013:MessageTracking" contains fewer fields and less information than sourcetype="ironport:summary".
6. Expansion of the macro `ut_parse_extended()`:
   lookup ut_parse_extended_lookup url as senderdomain list as list | spath input=ut_subdomain_parts | fields - ut_subdomain_parts
7. Expansion of the macro `security_content_ctime(end_time)`:
   | convert timeformat="%Y-%m-%dT%H:%M:%S" ctime(end_time)
8. Expansion of the macro `security_content_ctime(start_time)`:
   | convert timeformat="%Y-%m-%dT%H:%M:%S" ctime(start_time)
9. Is there a way I could improve performance as well as readability?

Appreciate your help and support.
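For illustration, a minimal sketch of how such a whitelist lookup is often applied to filter events. The lookup column name "domain" and the index name are assumptions, since only the file name was given:

   index=email sourcetype="ironport:summary"
   | lookup 8112_domain_whitelist.csv domain AS src_user_domain OUTPUT domain AS whitelisted
   | where isnull(whitelisted)

Events whose sender domain appears in the lookup are dropped; everything else passes through.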
Hi Team, I am a volunteer working in a non-profit organization. Is there any Splunk pricing available for non-profit organizations? Thank you.
Hello! I am also facing the same issue. I checked, and the environment is set to SPLUNK_HOME=opt/splunk. Any help, please? Thanks.
Hi,

Can you try the below config in props.conf?

[syslog]
TRANSFORMS-set = set_parse,set_null

Your transforms stanza says set_parse, but your props says set_parsing, so the names don't match.
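For reference, a matched props/transforms pair for this kind of null-queue routing often looks like the sketch below. The REGEX values are placeholders; only the stanza names come from the post above. Note that with these placeholder regexes the order in TRANSFORMS-set is reversed: the catch-all nullQueue transform must run first, because when several transforms match, the last one to set the queue wins.

   # props.conf
   [syslog]
   TRANSFORMS-set = set_null,set_parse

   # transforms.conf
   [set_null]
   # send everything to the null queue by default
   REGEX = .
   DEST_KEY = queue
   FORMAT = nullQueue

   [set_parse]
   # rescue the events you want to keep
   REGEX = pattern_for_events_to_keep
   DEST_KEY = queue
   FORMAT = indexQueue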
Hi @yuanliu

I attached an event with the correlationId. So if we extract the result, it goes beyond the pagination in the table as well, right?

{ "correlationId" : "490cfba0e9f3c770b40",
  "message" : "Processed all revenueData",
  "tracePoint" : "FLOW",
  "priority" : "INFO",
  "category" : "prc-api",
  "elapsed" : 472,
  "locationInfo" : { "lineInFile" : "205", "component" : "json-logger:logger", "fileName" : "G.xml", "rootContainer" : "syncFlow" },
  "timestamp" : "2024-03-06T20:57:17.119Z",
  "content" : { "List of Batches Processed" : [
    { "P_REQUEST_ID" : "1005377", "P_BATCH_ID" : "1", "P_TEMPLATE" : "Template", "P_PERIOD" : "MAR-24", "P_MORE_BATCHES_EXISTS" : "Y", "P_FILE_NAME" : "Template20240306102852.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1005177", "P_BATCH_ID" : "2", "P_TEMPLATE" : "Template", "P_PERIOD" : "MAR-24", "P_MORE_BATCHES_EXISTS" : "Y", "P_FILE_NAME" : "Template20240306102959.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1005377", "P_BATCH_ID" : "3", "P_TEMPLATE" : "Template", "P_PERIOD" : "MAR-24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306103103.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1005377", "P_BATCH_ID" : "4", "P_TEMPLATE" : "Template", "P_PERIOD" : "MAR-24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "Template20240306103205.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "1005377", "P_BATCH_ID" : "5", "P_TEMPLATE" : "Template", "P_PERIOD" : "MAR-24", "P_MORE_BATCHES_EXISTS" : "Y", "P_FILE_NAME" : "Template20240306103306.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" },
    { "P_REQUEST_ID" : "100532177", "P_BATCH_ID" : "6", "P_TEMPLATE" : "ATVI_Transaction_Template", "P_PERIOD" : "MAR-24", "P_MORE_BATCHES_EXISTS" : "Y", "P_ZUORA_FILE_NAME" : "rev_ATVI_Transaction_Template20240306103407.csv", "P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0", "P_RETURN_STATUS" : "SUCCESS" }
I don't see a correlationID in your sample data. Is it a root node in the JSON, or is it in content as well? I will assume a root node. (See my new emulation below.)

Importantly, I am speculating that you want to group all values of these three fields by correlationID. Is this the requirement? I will assume yes. Normally, data people will want to go the path in my previous comment, because you don't want to mix-and-match BatchId, RequestID, and Status. Do you care whether the order is mixed up? The result display you illustrated doesn't answer this question. I will assume that you do care about order of appearance. (But I do want to warn you that three ordered lists of >100 values are not good for users. I, for one, would hate to look at such a table.) If so, you must answer yet more important questions:

Do your data contain identical triplets (BatchId, RequestID, Status) within any given correlationID?
If they do, do you care to preserve all triplets?
If you want to preserve all triplets, do you care about the order of the events that carry them?
Or do you want to filter out all duplicates?
If you want to remove duplicate triplets, do you care about the order of the events that carry them?
If you care about the order, what are the criteria to order them?

See, volunteers on this board have no intimate knowledge of your dataset or your use case. Illustrating a subset of data in text is an excellent start. But you still need to define the problem to the extent that another person, who doesn't have your knowledge, can sieve through your data and arrive at the same conclusion as you would - all without SPL. If the other person has to read your mind, nine times out of ten the mindreader will be wrong.

Below I give an emulation that includes a correlationID as a root node:

| makeresults
| eval _raw = "{\"content\" : { \"List of Batches Processed\" : [ { \"P_REQUEST_ID\" : \"177\", \"P_BATCH_ID\" : \"1\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1r7\", \"P_BATCH_ID\" : \"2\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"1577\", \"P_BATCH_ID\" : \"3\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }, { \"P_REQUEST_ID\" : \"16577\", \"P_BATCH_ID\" : \"4\", \"P_TEMPLATE\" : \"Template\", \"P_PERIOD\" : \"24\", \"P_MORE_BATCHES_EXISTS\" : \"Y\", \"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\", \"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\", \"P_RETURN_STATUS\" : \"SUCCESS\" }] }, \"correlationID\": \"125dfe5\" }"
| spath
``` data emulation above ```
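If ordered lists per correlationID are indeed what you want, here is a minimal sketch building on the emulation above (field names follow the emulated JSON; order of appearance is assumed to be acceptable):

   | spath
   | rename "content.List of Batches Processed{}.P_BATCH_ID" AS BatchId, "content.List of Batches Processed{}.P_REQUEST_ID" AS RequestID, "content.List of Batches Processed{}.P_RETURN_STATUS" AS Status
   | stats list(BatchId) AS BatchId, list(RequestID) AS RequestID, list(Status) AS Status by correlationID

Keep in mind that stats list() retains at most 100 values per field, which is another reason the ">100 values" caveat above matters.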
I finally tried this using a different index and it worked just fine. I'm thinking it's the Forwarded Events channel forwarding issue introduced in v9.1. Once we upgrade to v9.2, it should work just fine for EventID 4688. Again, thanks for the input!
I did get the exclusion under inputs.conf to work with different indexes using this format, just with double backslashes rather than triple or quadruple, so there's just an issue with how my Windows security events are set up. We're upgrading to v9.2 soon in case it's an issue with the arbitrary formatting of the Forwarded Events channel from the v9.1 update. The inputs.conf exclusion seems to work with everything else.
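For reference, a sketch of the kind of inputs.conf exclusion being described. The index name, event code, and excluded executable path are placeholders, not from the original post:

   [WinEventLog://Security]
   index = win_security
   blacklist1 = EventCode="4688" Message="(?i)New Process Name:\s+C:\\Windows\\System32\\conhost\.exe"

In the regex, a literal backslash in the path is written as a double backslash, which matches the "double rather than triple or quadruple" observation above.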
Hi @Ryan.Paredez , Thank you for your support, but I didn't find what I'm looking for. I wanted to look at the repo on GitHub and check the issues, but it's not available. Do you know any way to contact support for the React-Native SDK? Thanks, Dalia
I am curious how you set up the systemd service with Splunk without running ./splunk as the splunk user. What happens when you try to become the splunk user and run the splunk binary?

If it absolutely does not work, it might be possible to pass the arguments in by modifying the Splunkd.service file (usually at /etc/systemd/system/Splunkd.service). It will have a line of:

ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd

which could have the arguments added:

ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd --accept-license --answer-yes

As I don't know how to make my test machine un-accept the license, I am not able to test this at the moment.
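If you'd rather not edit the unit file in place, an equally untested sketch of the same change as a systemd drop-in override:

   # run: sudo systemctl edit Splunkd.service
   # (creates /etc/systemd/system/Splunkd.service.d/override.conf)
   [Service]
   ExecStart=
   ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd --accept-license --answer-yes

The empty ExecStart= line clears the original value before setting the replacement; run sudo systemctl daemon-reload afterwards if you edit the file by hand.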
I need to extract the timestamp from a JSON log where date and time are in two separate fields. Example below:

{ "Date": 240315, "EMVFallback": false, "FunctionCode": 80, "Time": 154915 }

Date here is the equivalent of 2024-March-15, and the time is 15:49:15 (3:49:15 pm). I am struggling to find a way to extract the timestamp using props.conf. Could you please assist?
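Because the date and time are not adjacent in the raw event, TIME_PREFIX/TIME_FORMAT alone may not be able to stitch them together. One possible sketch using ingest-time eval instead - hedged, since it assumes a Splunk version where json_extract() is allowed in INGEST_EVAL (8.1+), and the sourcetype name is made up:

   # props.conf
   [my:json:sourcetype]
   # fall back to current time first; the transform below overwrites _time
   DATETIME_CONFIG = CURRENT
   TRANSFORMS-settime = set_time_from_date_time

   # transforms.conf
   [set_time_from_date_time]
   # "240315" . "154915" -> "240315154915" -> 2024-03-15 15:49:15
   INGEST_EVAL = _time=strptime(json_extract(_raw,"Date").json_extract(_raw,"Time"), "%y%m%d%H%M%S")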
hi, I just installed Splunk and was trying to download the Splunk Add-on for Cisco WSA and got the same error; my username and password are both correct.
Hello, For your connection to be shown as secure going to both hostname and IP, both have to be on the certificate. In our environment, each server has an fqdn (i.e. server1.MyBiz.com) for its connection on the production network, and an fqdn (i.e. server1.MyBiz.local) for its connection on the local admin network. So their certificates are requested with a CN of the production network fqdn and a SAN of the admin network fqdn. And because we want to continue to access them securely if/when DNS has a bad day, their public and private IP addresses also get SANs. Maybe we've been lucky, but we've not had any problems getting certificates with multiple SANs. Hope this helps!
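For illustration, a sketch of requesting a CSR with multiple SANs using OpenSSL (the names come from the hypothetical example above, and the IP addresses are placeholders):

   openssl req -new -newkey rsa:2048 -nodes \
     -keyout server1.key -out server1.csr \
     -subj "/CN=server1.MyBiz.com" \
     -addext "subjectAltName=DNS:server1.MyBiz.com,DNS:server1.MyBiz.local,IP:203.0.113.10,IP:10.0.0.10"

Note that -addext requires OpenSSL 1.1.1 or later; on older versions the SANs go in a config file passed with -config.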
Hi, if your pfSense is running on a node where you can install a UF, then just install it. Then you could probably use this https://splunkbase.splunk.com/app/1527 or otherwise add the needed inputs.conf stanzas to collect the log files. If you cannot install a UF, then you can probably use syslog to send logs to your syslog collector and then send those to Splunk with a UF or in some other way, e.g. SC4S. r. Ismo
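For illustration, a minimal inputs.conf monitor stanza of the kind meant above. The index name and sourcetype are placeholders; /var/log/filter.log is pfSense's firewall log on a default install, but verify the path on your own box:

   [monitor:///var/log/filter.log]
   index = netfw
   sourcetype = pfsense:filterlog
   disabled = false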
I have a feeling that it's a problem with those double-quote characters in the eval command. Usually the search string has to be included as a parameter to the REST API request and enclosed in quotes (usually double quotes), so if your search string contains double quotes, it may get cut off at | eval ip= and then Splunk will complain about a malformed eval expression. If your search string is indeed enclosed in double quotes, then make sure that the double quotes inside the search are escaped:

search index=* | eval ip=\"8.8.8.8\" | search ip | stats count by index | eval result=if(count>0, \"IP found\", \"IP not found\")
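For illustration, the same search submitted with curl (host and credentials are placeholders). With --data-urlencode and single quotes around the shell argument, the inner double quotes need no escaping at all:

   curl -k -u admin:changeme https://localhost:8089/services/search/jobs \
     --data-urlencode 'search=search index=* | eval ip="8.8.8.8" | search ip | stats count by index | eval result=if(count>0, "IP found", "IP not found")'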
Does the test require you to run that Python library outside of Splunk Enterprise?
Sure thing. When running that search over "Last 30 days", in the resulting _raw field I get:

02/14/2024 00:00:00 +0100, info_min_time=1707865200.000, info_max_time=1710537355.000, info_search_time=1710537355.622, othertestfield=test2, orig_sourcetype=splunkd

which does indeed look like the _time value has defaulted to info_min_time.
I figured it out. It's saving the report with the Visualization tab. Thanks for your help in pointing me in the right direction.
Hi, Has this ever worked? Maybe the easiest fix would be to just pull a fresh UF package from your SCP and install it again on your HF. One reason could be that those certificates have expired. Another could be that your node has the wrong time. r. Ismo
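To check both suspicions quickly, something like this on the HF (the path is the default server certificate location and may differ in your environment):

   openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -enddate
   date    # compare the system clock against the notAfter value above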