All Posts


Do you see anything in _internal regarding the execution of the script? What about other scripts: if you create a new action rule with a different script, does that work, or are all scripts failing?
You can open a glass table by sharing its URL; of course, you have to log in to see it.
Hi @Dalton2, let me understand: do you want to have the values of the six use cases you already have in the same table, or do you want a solution for each of the six use cases? In the first case, you should create six searches that each output two columns: UseCase (containing the use case name) and value (containing the found values); then store the results of these six searches in a summary index using the collect command; finally, search the summary index and use the two stored columns to display the results in a table. In the second case, you want six different use cases that depend on many factors (kind of data, fields, etc.); that is too long for one answer, and I suggest splitting it into separate questions. Ciao. Giuseppe
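A minimal sketch of the first approach, assuming a summary index named summary_usecases and one scheduled search per use case (the index, sourcetype, and conditions below are placeholders, not taken from this thread):

index=my_index sourcetype=my_sourcetype <conditions for use case 1>
| stats count AS value
| eval UseCase="Use Case 1"
| collect index=summary_usecases

Then the final table reads the stored results back:

index=summary_usecases
| stats latest(value) AS value BY UseCase
| table UseCase value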
Thanks @kakawun this worked. Happy to give users this temporarily. Hopefully Splunk will rectify this bug soon. If you can mark your own comment as an answer I would recommend that! Thanks again.
Hi @aditsss, if you don't want the last row with some empty fields, you have to remove the empty lines. You can do that if you know the name of the first column (which I don't know) and add a filter on it. If the column is called "column1":

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval True=if(searchmatch("ebnc event balanced successfully"),"✔",""), EBNCStatus="ebnc event balanced successfully", Day=strftime(_time,"%Y-%m-%d")
| dedup EBNCStatus Day
| search column1=*
| table EBNCStatus True Day

(Note: there is an asterisk outside the quotes in the second eval.) Ciao. Giuseppe
Hi @nithys, as I said, the easiest way is to create a lookup containing two columns (e.g. value and choice):

value   choice
0-a     Airbag Scheduling
1-b     Airbag Scheduling
2-b     Airbag Scheduling
3-      Airbag Scheduling
4-      Airbag Scheduling
5-      Airbag Scheduling
6-a     Airbag Scheduling
0-d     Material
1-e     Material
2-e     Material
3-      Material
4-d     Material
5-d     Material
6-d     Material
0-e     Cost Summary
1-f     Cost Summary
2-f     Cost Summary
3-e     Cost Summary
4-      Cost Summary
5-      Cost Summary
6-      Cost Summary
0-f     All
1-e     All
2-b     All
3-b     All
4-md    All
5-a     All

Then in the first dropdown you could run:

| inputlookup mylookup
| dedup choice
| sort choice
| table choice

and in the second dropdown you could run:

| inputlookup mylookup WHERE choice=$FirstToken$
| dedup value
| sort value
| table value

so you'll have the values relative to the chosen first token. Ciao. Giuseppe
Hi @mohsplunking, if you need only an alert, as I said, the solution from @bowesmana is perfect and you don't need any additional command. The eval/case could be useful if you need to display some additional information, e.g. an alert level based on the quantity. Ciao. Giuseppe
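A minimal sketch of that eval/case idea, building on the stats/where search from the other reply (the thresholds and level names are placeholders):

index=your_source_index status>=400 status<600
| stats count by ip
| where count>100
| eval alert_level=case(count>1000, "critical", count>500, "high", true(), "medium")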
Hi @love0sxy, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
I had data like this in Splunk:

DT=2023-09-13T23:59:56.029-0500|LogId=WFTxLog|AppId=SWBS|AppInst=server1:/apps/comp/swbs/instances/99912|TId=executorWithCallerRunsPolicy-30|ProcId=dc47cf25-2318-4f61-bd33-10f8928916ce|RefMsgNm=creditDecisionInformation2011_02|MsgId=fc1935b6-06c0-42bb-89d1-caf1076fff68|SrcApp=XX|ViewPrefs=CUSTOMER_202108-customerDetailIndicator,customerRelationshipDailyUpdateIndicator,customerRelationshipMonthlyUpdateIndicator,customerRelationshipSummaryIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,disasterReliefIndicator~APPLICANT_201111-applicationDetailIndicator,incomeDetailIndicator~MULTIPLE_ACCOUNT_CUSTOMER-riskRecommendationIndicator~CUSTOMER_CLI_201705-customerCLIDataIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRltnshipMonthlySummaryTaxId|Elapsed=199|AppBody=SOR_RESP|

I had a query like this:

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="wf:swbs:profiling:txt" AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100
| rename SrcApp as Channel, SubApp as "Sub Application", RefMsgNm as Message, DestId as SOR, MsgNm as "SOR Message"
| fields Channel, "Sub Application", Message, SOR, "SOR Message", count
| sort Channel, "Sub Application", Message, SOR, "SOR Message", count

Now my app has moved to the cloud and the log data is embedded within JSON; events show up in Splunk like the one below, with the application-specific log data present as the value of the JSON field "msg".

{"cf_app_id":"75390614-95dc-474e-ad63-2358769b0641","cf_app_name":"CA00000-app-uat","cf_org_id":"7cc80b1c-0453-4487-ba19-4e3ffc868cf3","cf_org_name":"CA00000-test","cf_space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","cf_space_name":"uat","deployment":"cf-ab9ba4f5a0f082dfc130","event_type":"LogMessage","ip":"00.00.000.00","job":"diego_cell","job_index":"77731dca-f4e8-4079-a97d-b65f20911c53","message_type":"OUT","msg":"DT=2023-09-14T21:41:52.638-0500|LogId=WFTxLog|AppId=SWBS|AppInst=ff664658-a378-42f4-4a4d-2330:/home/vcap/app|TId=executorWithCallerRunsPolicy-7|ProcId=92d42ef2-7940-48e8-b4df-1f92790b657e|RefMsgNm=creditDecisionInformation2011_02|MsgId=8f5a46f8-7288-442f-9f56-ecaa05b345af|SrcApp=XX|ViewPrefs=CUSTOMER_201605-customerDetailIndicator,customerRiskSummaryIndicator~ACCOUNT_201505-accountOverLimitIndicator,creditChargeoffIndicator,creditDelinquencyIndicator,creditLineDecreaseIndicator,creditLineRestrictionIndicator,creditProductNSFIndicator,depositClosedForCauseIndicator,depositHardHoldIndicator,forcedClosedIndicator,foreclosureIndicator,loanModifcationPrgrmIndicator,nonExpansionQueueIndicator,outOfBoundIndicator,overdraftOnDepositIndicator,returnedDepositItemsIndicator,SSDILPrgrmIndicator|TxAct=Response Received|DestId=EIW|MsgNm=customerRiskSummaryTaxId|Elapsed=259|AppBody=SOR_RESP|","origin":"rep","source_instance":"0","source_type":"APP/PROC/WEB","tags":{"instance_id":"0","process_instance_id":"ff664658-a378-42f4-4a4d-2330","space_id":"dd35773f-63cb-4ed3-8199-2ae2ff1331f8","space_name":"uat"},"timestamp":1694745712641480200}

I want the Splunk query output to look like it did before. How can I do this? Any suggestions?
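One possible sketch of an approach (not a verified solution; the sourcetype is a placeholder and the field names are assumed from the sample event): extract the pipe-delimited payload from the JSON msg field at search time, then reuse the original pipeline.

index=wf_pvsi_other wf_id=swbs wf_env=prod sourcetype="your:cloud:json:sourcetype"
| spath path=msg output=msg
| rename msg AS _raw ```treat the embedded payload as the event text```
| extract pairdelim="|" kvdelim="=" ```extracts DT, AppId, SrcApp, DestId, MsgNm, AppBody, ...```
| search AppBody=SOR_RESP DestId=EIW
| table SrcApp, SubApp, RefMsgNm, DestId, MsgNm
| fillnull value=NA SubApp
| top SrcApp, SubApp, RefMsgNm, DestId, MsgNm limit=100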
I have opened a case, and Splunk Engineering confirmed that there is a bug due to a code change since version 9.0.5. The workaround for now is to give the user the "admin_all_objects" capability, but that is very close to administrative privileges.
We have also started getting this issue after updating Splunk from 8.2 to 9.1. When I change the user's role to Admin, they can upload data; when I change back to the previous role with the mentioned permissions, they get the same error, "User is not allowed to modify the job". This must be a change in one of the roles in the update, but I'm still trying to figure out which permission is needed. My user has the following capabilities, which is what I read is required to add data: edit_monitor, indexes_edit, edit_tcp, and search. But still no luck. I will update if I find the specific permission required.
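If anyone needs the interim workaround mentioned earlier in this thread (granting admin_all_objects), a sketch of what that could look like in authorize.conf; the role name is a placeholder, and since this capability is close to full admin it should be treated as temporary:

# local/authorize.conf
[role_data_uploader]
importRoles = user
edit_monitor = enabled
indexes_edit = enabled
edit_tcp = enabled
search = enabled
admin_all_objects = enabled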
Is it possible to have warm data stored partially on the local indexers' storage and partially on remote storage? My total retention period is 90 days, but I would like to keep data for the first 45 days on the local indexers as warm buckets and then send the warm buckets to an AWS S3 bucket for another 45 days. I am using the settings below, and the data is sent to S3 after 90 days and stored as frozen instead of warm or cold. How can I keep warm or cold data in S3?

[default]
remotePath = volume:remote_store/$_index_name
repFactor = auto
frozenTimePeriodInSecs = 7776000

If I change frozenTimePeriodInSecs to 45 days (3888000), the data will be sent to S3 after 45 days, but it will be sent as frozen buckets. I need to send data as either warm or cold buckets. If I set up:

[volume:local_store]
maxVolumeDataSizeMB = 1000000

that will send data to S3 only after 1 TB of local storage fills up. How can I maintain time-based storage with 45 days on local storage and 45 days on remote storage?
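For context, the remotePath above refers to a remote volume; a typical SmartStore volume definition in indexes.conf looks something like this (the bucket name and endpoint below are placeholders):

[volume:remote_store]
storageType = remote
path = s3://my-smartstore-bucket
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com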
Using the method you mentioned, the problem has been successfully solved. Thank you very much.
Use the addtotals command, which will sum up the fields you specify; it supports wildcards:

| addtotals RemediatedComputerCount* fieldname=RemediatedComputerCount
| addtotals cvetotal* fieldname=cvetotal
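A sketch of how those lines would sit at the end of the timechart search from the question (field names assumed to match the columns shown there):

```the rest of the original search pipeline goes here, unchanged```
| timechart span=1mon sum(cvetotal) as cvetotal sum(RemediatedComputerCount) as RemediatedComputerCount by ComputerDatabaseId
| addtotals RemediatedComputerCount* fieldname=RemediatedComputerCount
| addtotals cvetotal* fieldname=cvetotal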
When you said you did this:

"I added a new lookup by uploading the CSV file by going to Lookups » Lookup table files » Add new; the CSV file was uploaded to this directory and I can change the permission"

When you upload the CSV, the list of lookups will show your lookup as private. You can then change the permissions to app, which MOVES the file-system location of that file to the location inside the app. Then, when you run outputlookup, you will be updating the one in the app folder, which you have previously set to app permission.
@mohsplunking

index=your_source_index status>=400 status<600
| stats count by ip
| where count>100

or you can do:

index=your_source_index status>=400 status<600
| top ip
| where count > 100

but I would prefer stats over top.
Did you ever find out how to do this? I'm in a similar situation and was hoping you had been able to find a workaround.
Thank you @elizabethl_splu, I very much appreciate your response. I will keep an eye on the release notes.
Good day,

I have this SPL:

index=test_7d sourcetype="Ibm:BigFix:CVE" earliest=-1d
| search FixletSourceSeverityTxt="Critical" OR FixletSourceSeverityTxt="Urgent"
| eval ReleaseDtm=strptime(FixletSourceReleaseDtm, "%Y-%m-%d")
| where ReleaseDtm>1672531199 ```Epoch time for 12/31/22 23:59:59, this gets only 2023 fixlets```
| eval _time = ReleaseDtm
| makemv FixletCVETxt
| eval cvecount=mvcount(FixletCVETxt)
| eval cvetotal=cvecount * RemediatedComputerCount
| timechart span=1mon sum(cvetotal) as cvetotal sum(RemediatedComputerCount) as RemediatedComputerCount by ComputerDatabaseId

That has the output of:

_time     RemediatedComputerCount: 0   RemediatedComputerCount: 1   cvetotal: 0   cvetotal: 1
2023-01   113575                       29512                        7177373       1619396
2023-02   230629                       42239                        4779801       1063153
2023-03   253770                       55769                        7246548       1792831
2023-04   139419                       36156                        8175076       2192272
2023-05   144117                       40335                        2781698       740369
2023-06   314684                       81621                        5574141       1420760
2023-07   163672                       47112                        13643474      3957452
2023-08   281891                       65004                        6481422       1587176
2023-09   4596                         6508                         62384         98312

I would like to add the 'RemediatedComputerCount: #' columns into a total column by month, next to the existing columns, and then add another column with totals for 'cvetotal: #'. I don't want totals of each column as a grand total at the bottom, just the total of the like columns in a new column by _time. Suggestions? Thanks!
Hello,

Does Tenable not send remediated vulnerabilities to Splunk after it has reported them once? The situation is as follows: Host ABC had its CVE-1234-5678 patched in April 2023, for which there is a record in the index. But since then, the remediated vulnerability has not been reported a single time; from there on, only the open ones are reported. I tried enabling "Historical reporting of remediated vulnerabilities", but that still isn't helping. As a result, we consider that host to still have the vulnerability as "Open". Is this the expected behaviour? I thought this setting would report the remediated vulnerabilities each time the scan runs?