All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Use the relative_time function to calculate time offsets.

| eval new_time = relative_time(now(), "+1d@d+10h")

The format string breaks down as follows:
"+1d": this time tomorrow
"@d": snap the time back to midnight (0:00)
"+10h": add ten hours
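For readers outside SPL, the same offset logic can be sketched in Python. This is only an illustration of the semantics described above, not Splunk code; the function name is invented:

```python
from datetime import datetime, timedelta

def plus_1d_at_d_plus_10h(now: datetime) -> datetime:
    """Mimic relative_time(now(), "+1d@d+10h"):
    add one day, snap to midnight, then add ten hours."""
    t = now + timedelta(days=1)  # "+1d": this time tomorrow
    t = t.replace(hour=0, minute=0, second=0, microsecond=0)  # "@d": snap to midnight
    return t + timedelta(hours=10)  # "+10h": add ten hours

# From any time today, the result is 10:00 tomorrow.
result = plus_1d_at_d_plus_10h(datetime(2024, 3, 5, 14, 30))
# result == datetime(2024, 3, 6, 10, 0)
```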
Because this is a test environment, the logs are being added through the UI's "Add Data" > "Upload" feature. I have a CSV file that contains the logs.  Is this a valid test method?
@cmg I do have to ask why you are doing it this way? The app framework removes the need for all of this. As my old mentor said, "Use the platform, Luke... erm, Tom."

Why not just use the HTTP action and then select the data it returns via the relevant datapath downstream? The HTTP app doesn't show you all the returned fields in the playbook datapaths, because the developer couldn't know everything that would be returned and so stopped at "response_body" or "parsed_response_body". You have to write the path to the returned data yourself.

The best way is to run the action, select it in the activity pane of the container, find the value you want in the JSON presented, and click on the key in that window. There should be a datapath-type string at the top; a 0 becomes * and a > becomes . in the datapath you put in the playbook.

-- Hope this helped! If it solved your issue, please mark it as a solution for future questions on the same thing. Happy SOARing! --
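The 0-to-* and >-to-. substitution described above can be sketched mechanically. The function name and example path here are invented for illustration, and real SOAR datapaths carry more structure than this simple rewrite:

```python
def ui_path_to_datapath(ui_path: str) -> str:
    """Apply the substitution described above: a 0 (a list index in the
    JSON view) becomes * and the > separator becomes a dot. Illustrative
    only; not an official SOAR utility."""
    parts = [p.strip() for p in ui_path.split(">")]
    return ".".join("*" if p == "0" else p for p in parts)

# A path as it might appear in the activity pane:
print(ui_path_to_datapath("action_result > data > 0 > parsed_response_body > id"))
# action_result.data.*.parsed_response_body.id
```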
Hi guys, thanks in advance. I am using the transaction command to fetch unique correlationId values, and I have multiple conditions to match. Below is my query. I am getting results, but not in the form I want.

index="mulesoft" (message="API: START: /v1/fin_outbound") OR (message="API: START: /v1/onDemand") OR (message="API: START: /v1/fin_Import") OR (*End of GL-import flow*) OR (tracePoint="EXCEPTION") OR (priority="WARN" AND *GLImport Job Already Running, Please wait for the job to complete*) OR (*End of GL Import process - No files found for import to ISG*)
| transaction correlationId
| search NOT message IN ("API: START: /v1/fin_Zuora_GL_Revpro_Journals_outbound")
| rename content.File.fid as "TransferBatch/OnDemand" content.File.fname as "BatchName/FileName" content.File.fprocess_message as ProcessMsg content.File.fstatus as Status content.File.isg_file_batch_id as OracleBatchID content.File.total_rec_count as "Total Record Count"
| eventstats min(timestamp) AS Start_Time, max(timestamp) AS End_Time by correlationId
| eval JobType=case(like('message',"%API: START: /v1/onDemand%"),"OnDemand", like('message',"%API: START: /v1/fin_Import%"),"Scheduled")
| eval Status=case(like('Status',"%SUCCESS%"),"SUCCESS", like('Status',"%ERROR%"),"ERROR", like('tracePoint',"%EXCEPTION%"),"ERROR", like('priority',"%WARN%"),"WARN", like('message',"%End of GL Import process - No files found for import to ISG%"),"ERROR")
| eval ProcessMsg=coalesce(ProcessMsg,message)
| eval StartTime=round(strptime(Start_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval EndTime=round(strptime(End_Time, "%Y-%m-%dT%H:%M:%S.%QZ"))
| eval ElapsedTimeInSecs=EndTime-StartTime
| eval "Total Elapsed Time"=strftime(ElapsedTimeInSecs,"%H:%M:%S")
| rename Logon_Time as Timestamp
| table Status Start_Time JobType "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
| fields - ElapsedTimeInSecs
| search Status="*"

Screenshot added; I want to show only the yellow-marked values.
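One caveat worth noting in the query above: strftime() applied to a raw seconds value formats it as an epoch timestamp, which only looks right for durations under 24 hours in a zero-offset timezone. Doing the duration arithmetic directly avoids that. A rough Python sketch of the same min/max-per-correlationId elapsed-time step, with invented sample timestamps:

```python
from datetime import datetime

def elapsed_hms(timestamps: list[str]) -> str:
    """Mirror the SPL above: take min/max of the ISO-8601 timestamps for
    one correlationId and format the difference as H:M:S. Illustrative
    only; sample data is invented."""
    parsed = [datetime.strptime(t, "%Y-%m-%dT%H:%M:%S.%fZ") for t in timestamps]
    secs = int((max(parsed) - min(parsed)).total_seconds())
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02}"

print(elapsed_hms(["2024-01-01T10:00:00.000Z", "2024-01-01T10:05:30.000Z"]))
# 00:05:30
```

Unlike the epoch-formatting trick, this also stays correct for durations longer than a day.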
I have an alert which detects when a log feed has failed. The team the alert goes to has asked that I let them suppress the alert. I have now created a mailto link within the alert email that sends an email with a specifically crafted subject and body; that subject is detected by all future alerts and suppresses them for 12 hours. A simple calculation generates the 12 hours: the epoch timestamp is in the subject header, and the alert SPL looks at the subject and either suppresses the alert or not. This works perfectly. The technical team have now asked that I vary the suppression as follows:

If the alert came in before 10AM, the suppression remains 12 hours.
If the alert came in after 10AM, the suppression lasts "until 10AM the following day".

So: how do you calculate a timestamp for 10AM the following day? It must be simple, but my mind has lost it right now. Something like: if the current hour is past 10AM, timestamp = tomorrow at 10:00.
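In SPL, relative_time(now(), "+1d@d+10h") yields exactly "10:00 tomorrow" (add a day, snap to midnight, add ten hours). The branching rule itself can be sketched in Python like this; the function name and sample datetimes are invented for illustration:

```python
from datetime import datetime, timedelta

def suppress_until(alert_time: datetime) -> datetime:
    """The rule described above: before 10:00, suppress for 12 hours;
    at or after 10:00, suppress until 10:00 the following day."""
    if alert_time.hour < 10:
        return alert_time + timedelta(hours=12)
    return (alert_time + timedelta(days=1)).replace(
        hour=10, minute=0, second=0, microsecond=0)

print(suppress_until(datetime(2024, 3, 5, 9, 0)))   # 2024-03-05 21:00:00
print(suppress_until(datetime(2024, 3, 5, 15, 0)))  # 2024-03-06 10:00:00
```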
Hi @srseceng,

How do you collect these logs: from a Universal Forwarder or from a Heavy Forwarder? If from a Heavy Forwarder, the SEDCMD props.conf must be located on the HF. If you receive these logs from a Universal Forwarder and there isn't any intermediate Heavy Forwarder, the props.conf can be located on the Indexers. In other words, parsing and typing are done on the first full Splunk instance that the data passes through. Then check that the regex and the sourcetype are correct.

Ciao.
Giuseppe
Hi @Marcie.Sirbaugh, Thanks for following up and sharing your solution! I love to see it. 
Hi @Gopikrishnan.Ravindran, Since the Community was not able to jump in and help, you can try contacting AppDynamics Support: How do I submit a Support ticket? An FAQ  If you get or find a solution, can you please share your solution here as a reply?
Has anyone seen this issue? I am using CentOS 8 Stream to install SOAR 6.2.0 on-prem, but it can't read /etc/redhat-release.

[phantom@10 splunk-soar]$ ./soar-prepare-system --splunk-soar-home /opt/splunk-soar/ --https-port 443
Detailed logs will be located at /opt/splunk-soar/var/log/phantom/phantom_install_log
Preparing system for installation of Splunk SOAR 6.2.0.355
Unable to read CentOS/RHEL version from /etc/redhat-release.
Traceback (most recent call last):
  File "/opt/splunk-soar/./soar-prepare-system", line 93, in main
    pre_installer.run()
  File "/opt/splunk-soar/install/deployments/deployment.py", line 132, in run
    self.run_pre_deploy()
  File "/opt/splunk-soar/usr/python39/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/splunk-soar/install/deployments/deployment.py", line 146, in run_pre_deploy
    plan = DeploymentPlan.from_spec(self.spec, self.options)
  File "/opt/splunk-soar/install/deployments/deployment_plan.py", line 51, in from_spec
    deployment_operations=[_type(options) for _type in deployment_operations],
  File "/opt/splunk-soar/install/deployments/deployment_plan.py", line 51, in <listcomp>
    deployment_operations=[_type(options) for _type in deployment_operations],
  File "/opt/splunk-soar/install/operations/optional_tasks/rpm_packages.py", line 53, in __init__
    self.rpm_checker = RpmChecker(self.get_rpm_packages(), self.shell)
  File "/opt/splunk-soar/install/operations/optional_tasks/rpm_packages.py", line 63, in get_rpm_packages
    if get_os_family() == OsFamilyType.el7:
  File "/opt/splunk-soar/install/install_common.py", line 340, in get_os_family
    os_version = get_os_version()
  File "/opt/splunk-soar/install/install_common.py", line 326, in get_os_version
    return _get_centos_and_rhel_version()
  File "/opt/splunk-soar/install/install_common.py", line 315, in _get_centos_and_rhel_version
    raise InstallError("Unable to read CentOS/RHEL version from /etc/redhat-release.")
install.install_common.InstallError: Unable to read CentOS/RHEL version from /etc/redhat-release.
Pre-install failed.

When I open /etc/redhat-release:

[phantom@10 splunk-soar]$ cat /etc/redhat-release
NAME="CentOS stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"

Any suggestions are welcome.
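The file contents shown look like /etc/os-release KEY="value" pairs rather than the single "... release N" line that /etc/redhat-release normally holds, which would explain the parse failure. The installer's actual parsing code isn't shown, so the regex below is an assumption, but a version matcher of the usual shape illustrates the mismatch:

```python
import re

def parse_redhat_release(text: str):
    """Illustrative only: tools commonly match a line like
    'CentOS Stream release 8' in /etc/redhat-release. The contents pasted
    above are in os-release KEY="value" format instead, so a matcher of
    this shape finds nothing."""
    m = re.search(r"release (\d+)", text)
    return int(m.group(1)) if m else None

print(parse_redhat_release('CentOS Stream release 8'))           # 8
print(parse_redhat_release('NAME="CentOS stream"\nVERSION="8"')) # None
```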
This seems like it should be simple, but all I ever get is a two-column Sankey visualization: the starting event, the ending event, and ribbons between them whose width represents the count of events. Is there any way to create a visualization where each orderId starts with a message like "order received", and that order can then pass through 5-10 other messages, though not all orders will? I want to visualize how many orders stop at, say, message 3, while also seeing the paths onward from message 3 for those that don't end there.
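The shaping a multi-step Sankey needs is a list of adjacent message-to-message transitions per orderId (in SPL this kind of pairing is often built with commands like streamstats). A rough Python sketch of that reshaping, with an invented event stream:

```python
from collections import Counter

# Hypothetical event stream: (orderId, message) pairs in time order.
events = [
    ("A", "order received"), ("A", "validated"), ("A", "shipped"),
    ("B", "order received"), ("B", "validated"),
    ("C", "order received"),
]

# Group messages per order, preserving arrival order.
paths: dict[str, list[str]] = {}
for order_id, msg in events:
    paths.setdefault(order_id, []).append(msg)

# Count each adjacent step -> step transition; this pairwise edge list
# is the usual input for a multi-step Sankey. Orders that stop early
# simply contribute no further edges.
edges = Counter()
for msgs in paths.values():
    for src, dst in zip(msgs, msgs[1:]):
        edges[(src, dst)] += 1

print(dict(edges))
# {('order received', 'validated'): 2, ('validated', 'shipped'): 1}
```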
I have SOAR installed and am trying to figure out how to make configuration changes, specifically for accessing the web interface. We are currently accessing via: https://ipaddress:8000  Overall, I am trying to find out how to make it accessible via http, if possible. Along with that, I would like to know where to make general configuration changes similar to web.conf for Splunk. I had to dig around quite a bit to discover the login.html in a templates folder just to add my server names for clarity. Any help would be greatly appreciated! 
Thanks for the info! I deleted the events, updated props.conf, restarted Splunk, then uploaded the CSV again - but it is not working yet.
The case function exits at the first match and does not evaluate the remaining expressions. Put the AND condition first so it is evaluated before the others.

| eval P_RETURN_STATUS=case(
    like('P_MESSAGE',"%NO NEW BATCH EXISTS%") AND like('P_RETURN_STATUS',"%ERROR%"), "SUCCESS",
    like('P_RETURN_STATUS',"%SUCCESS%"), "SUCCESS",
    like('P_RETURN_STATUS',"%ERROR%"), "ERROR",
    1==1, "???")
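The ordering rule is easy to see with a minimal stand-in for case() (illustrative Python, not SPL; the substring checks merely play the role of like()):

```python
def case(*pairs):
    """Minimal stand-in for SPL's case(): evaluate condition/value pairs
    in order and return the value of the first true condition."""
    it = iter(pairs)
    for cond, value in zip(it, it):
        if cond:
            return value
    return None

status, message = "ERROR", "NO NEW BATCH EXISTS"

# AND branch last: the bare ERROR branch matches first and wins.
wrong = case("SUCCESS" in status, "SUCCESS",
             "ERROR" in status, "ERROR",
             "NO NEW BATCH EXISTS" in message and "ERROR" in status, "SUCCESS")

# AND branch first: the more specific condition is tested before the others.
right = case("NO NEW BATCH EXISTS" in message and "ERROR" in status, "SUCCESS",
             "SUCCESS" in status, "SUCCESS",
             "ERROR" in status, "ERROR")

print(wrong, right)  # ERROR SUCCESS
```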
It's possible Splunk doesn't like the \b metacharacter.  Try this alternative. SEDCMD-redact_ssn = s/(\D)\d{3}-?\d{2}-?\d{4}(\D)/\1XXXXXXXXX\2/g I also modified the regex to preserve the characters before and after the SSN and to make the hyphens optional.  
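The substitution can be sanity-checked outside Splunk with Python's re module (same pattern, invented sample input). One caveat: the \D anchors require a non-digit neighbor on each side, so an SSN at the very start or end of an event would not be matched:

```python
import re

# The SEDCMD above, expressed as a Python substitution for quick testing.
# The \D groups keep the characters around the SSN; -? makes hyphens optional.
pattern = re.compile(r"(\D)\d{3}-?\d{2}-?\d{4}(\D)")

line = ",505-88-5714,f,1963/09/23"
print(pattern.sub(r"\1XXXXXXXXX\2", line))
# ,XXXXXXXXX,f,1963/09/23
```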
I'm trying to get at the results of a phantom.act() action run, more specifically the Splunk HTTP app's "get file" action. Something as simple as:

# inside a custom code block
def get_action_results(**kwargs):
    phantom.debug(action)
    phantom.debug(success)
    phantom.debug(results)
    phantom.debug(handle)
    return

phantom.act('get file',
            parameters=[{'hostname': '', 'file_path': '/path/to/file'}],
            assets=["web_server"],
            callback=get_action_results)

The action will run as expected, but the callback isn't getting the results output. Am I misunderstanding callbacks in this scenario?
I want my send email action's email body to be a table, like my search result. How do I pass dynamic token field values?

$result.name$ $result.index$ $result.sourcetype$

How do I make the field values come side by side instead of below each other? This is what I get now in my email body:

name name2 name3 name4
index index2 index3 index4
sourcetype sourcetype2 sourcetype3 sourcetype4

I want it like this:

name name2 name3 name4... no, rather:

name index sourcetype
name2 index2 sourcetype2
name3 index3 sourcetype3
name4 index4 sourcetype4

Is it possible to do this?
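The reordering being asked for is a transpose: the tokens arrive column by column, and the desired table reads row by row. A Python sketch of just that reshaping (values invented; this does not address the email-token mechanics themselves):

```python
# Columns as the tokens deliver them: one list per field.
columns = [
    ["name", "name2", "name3", "name4"],
    ["index", "index2", "index3", "index4"],
    ["sourcetype", "sourcetype2", "sourcetype3", "sourcetype4"],
]

# Transpose columns into rows so each row holds one result's fields.
rows = list(zip(*columns))
for row in rows:
    print(" ".join(row))
# name index sourcetype
# name2 index2 sourcetype2
# name3 index3 sourcetype3
# name4 index4 sourcetype4
```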
Hello, I am testing SEDCMD on a single-server Splunk architecture. Below is the current configuration, which is placed in /opt/splunk/etc/system/local/. I am uploading a CSV file which contains (fake) individual data including two formats of SSN (xxx-xx-xxxx and xxxxxxxxx). The masking is not working when I upload the CSV file. Can someone help point me in the right direction?

props.conf:

### CUSTOM ###
[csv]
SEDCMD-redact_ssn = s/\b\d{3}-\d{2}-\d{4}\b/XXXXXXXXX/g

Included below is FAKE individual data pulled from the CSV file for testing:

514302782,f,1986/05/27,Nicholson,Russell,Jacki,3097 Better Street,Kansas City,MO,66215,913-227-6106,jrussell@domain.com,a,345389698201044,232,2010/01/01
505-88-5714,f,1963/09/23,Mcclain,Venson,Lillian,539 Kyle Street,Wood River,NE,68883,308-583-8759,lvenson@domain.com,d,30204861594838,471,2011/12/01
Unfortunately, no. We used to use Splunk 8.8 when we faced the issue. We thought that since we could not find a solution to this issue we'd upgrade Splunk to 8.9 and DB Connect to its latest version and see if it fixes itself and it did. So, we just got lucky.
Thanks in advance. I need to show a status: if P_RETURN_STATUS is SUCCESS, show SUCCESS; if ERROR, show ERROR; but if P_RETURN_STATUS is ERROR and P_MESSAGE is "NO NEW BATCH EXISTS", show SUCCESS. The problem is that P_RETURN_STATUS already has the value ERROR. How do I override it when using the AND condition?

| eval P_RETURN_STATUS=case(like('P_RETURN_STATUS',"%SUCCESS%"),"SUCCESS", like('P_RETURN_STATUS',"%ERROR%"),"ERROR", like('P_MESSAGE',"%NO NEW BATCH EXISTS%") AND like('P_RETURN_STATUS',"%ERROR%"),"SUCCESS")
Hello everyone,

I have a problem with an automatic lookup and user roles. Let me explain:

With the admin role, everything is okay: the automatic lookup works and there are no warnings on my dashboard. When I use another role, it doesn't work. I've checked all the permissions for the lookups and knowledge objects, assigning them to the role in question. But nothing works; I'm really out of ideas.

Thank you very much for your help.