All Posts
thanks
I also have another question. I have written the query below to get a specific output:

index=xyz sourcetype="automation:csv" source="D:\\Intradiem_automation\\ACD_FILETRACKER.csv"
| rex field=_raw "^(?P<ACD>\w+\.\d+),(?P<ATTEMPTS>[^,]+),(?P<FAIL_REASON>[^,]*),(?P<INTERVAL_FILE>[^,]+),(?P<STATUS>\w+),(?P<START>[^,]+),(?P<FINISH>[^,]+),(?P<INGEST_TIME>.+)"
| eval field_in_hhmmss=tostring(INGEST_TIME, "duration")
| rename field_in_hhmmss AS INGESTION_TIME_HH-MM-SS
| search ACD="*" ATTEMPTS="*" FAIL_REASON="*" INTERVAL_FILE="*" STATUS="*" START="*" FINISH="*" INGESTION_TIME_HH-MM-SS="*"
| table ACD, ATTEMPTS, FAIL_REASON, INTERVAL_FILE, INTERVAL_FILE1, STATUS, START, FINISH, INGESTION_TIME_HH-MM-SS
| dedup INTERVAL_FILE
| sort -START

I would like to extract the filename "020624.0500" from the INTERVAL_FILE column and create another column named "Filename" between the INTERVAL_FILE column and the STATUS column. Please help.

ACD     ATTEMPTS  FAIL_REASON  INTERVAL_FILE                                       STATUS  START                           FINISH                          INGESTION_TIME_HH-MM-SS
acd.55  1         NULL         C:\totalview\ftp\switches\customer1\55\020624.0500  PASS    2024-02-06 11:32:30.057 +00:00  2024-02-06 11:32:52.274 +00:00  00:00:22
acd.55  1         NULL         C:\totalview\ftp\switches\customer1\55\020624.0530  PASS    2024-02-06 12:02:30.028 +00:00  2024-02-06 12:02:54.151 +00:00  00:00:24
acd.85  1         NULL         C:\totalview\ftp\switches\customer1\85\020624.0500  PASS    2024-02-06 11:31:30.021 +00:00  2024-02-06 11:31:40.788 +00:00  00:00:10
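A minimal sketch of one way to do this, assuming the file name is always everything after the last backslash in INTERVAL_FILE:

```
| rex field=INTERVAL_FILE "(?<Filename>[^\\\\]+)$"
| table ACD, ATTEMPTS, FAIL_REASON, INTERVAL_FILE, Filename, STATUS, START, FINISH, INGESTION_TIME_HH-MM-SS
```

The rex can go anywhere before the existing table command; placing Filename between INTERVAL_FILE and STATUS in the table field list controls the column position.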
@ITWhisperer great, it worked! Thank you for the quick solution.
| eval filelocation=if(like(filelocation,"%\cl1%"),"ACD55","ACD85")
Hi team, I have the same issue, and I have made sure all keys are correct.
I am looking for a specific query where I can alter the row values after the final output and create a new column with a new value. For example, I have written the query below:

index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv"
| rex field=_raw "\"(?<filename>\d*\.\d*)\"\,\"(?<filesize>\d*\.\d*)\"\,\"(?<filelocation>\S*)\""
| search filename="*" filesize="*" filelocation IN ("*cl3*", "*cl1*")
| table filename, filesize, filelocation

Which gives me the following output:

filename     filesize         filelocation
012624.1230  13253.10546875   E:\totalview\ftp\acd\cl1\backup_modified\012624.1230
012624.1230  2236.3291015625  E:\totalview\ftp\acd\cl3\backup\012624.1230
012624.1200  13338.828125     E:\totalview\ftp\acd\cl1\backup_modified\012624.1200
012624.1200  2172.1640625     E:\totalview\ftp\acd\cl3\backup\012624.1200
012624.1130  13292.32421875   E:\totalview\ftp\acd\cl1\backup_modified\012624.1130
012624.1130  2231.9658203125  E:\totalview\ftp\acd\cl3\backup\012624.1130
012624.1100  13438.65234375   E:\totalview\ftp\acd\cl1\backup_modified\012624.1100

BUT I would like the row values under the filelocation column to be replaced with "ACD55" where the file location contains cl1, and "ACD85" where it contains cl3. So the desired output should be:

filename     filesize         filelocation
012624.1230  13253.10546875   ACD55
012624.1230  2236.3291015625  ACD85
012624.1200  13338.828125     ACD55
012624.1200  2172.1640625     ACD85
012624.1130  13292.32421875   ACD55
012624.1130  2231.9658203125  ACD85
012624.1100  13438.65234375   ACD55

The raw events look like this:

"020424.0100","1164.953125","E:\totalview\ftp\acd\cl3\backup\020424.0100"
"020624.0130","1754.49609375","E:\totalview\ftp\acd\cl1\backup_modified\020624.0130"

Please suggest.
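One sketch is to replace the value in place with eval and case(); unlike a two-way if(), case() leaves any other location unchanged (the backslashes in the like() patterns are escaped for SPL string literals):

```
index=csv sourcetype="miscprocess:csv" source="D:\\automation\\miscprocess\\output_acd.csv"
| rex field=_raw "\"(?<filename>\d*\.\d*)\"\,\"(?<filesize>\d*\.\d*)\"\,\"(?<filelocation>\S*)\""
| search filename="*" filesize="*" filelocation IN ("*cl3*", "*cl1*")
| eval filelocation=case(like(filelocation, "%\\cl1\\%"), "ACD55",
                         like(filelocation, "%\\cl3\\%"), "ACD85",
                         true(), filelocation)
| table filename, filesize, filelocation
```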
What events are you dealing with? Please share an anonymised sample selection. What do your expected results look like? What have you tried so far?
All of these questions can be answered in the Cloud Monitoring Console, and you should start there instead of trying to write your own bespoke SPL. License > Storage overview is a great place to start. There are also DDAS and DDAA searches there.

Specifically for archive data, please review the docs here: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataArchiver

Steps to review the overall size and growth of your archived indexes (this helps you understand how much of your entitlement you are consuming, and predict usage and expenses for your archived data):
1. From Splunk Web, go to Settings > Indexes.
2. From the Indexes page, click on a value in the Archive Retention column.
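For the daily-ingest part specifically, a common sketch against the internal license log (this assumes your role can search the _internal index; the b field is licensed bytes):

```
index=_internal source=*license_usage.log* type=Usage
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_ingest_GB
```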
I'm trying to understand the query below before implementing it. What would the expected result be? Any idea about this query? https://lantern.splunk.com/Splunk_Platform/UCE/Security/Threat_Hunting/Protecting_a_Salesforce_cloud...

ROWS_PROCESSED>0 EVENT_TYPE=API OR EVENT_TYPE=BulkAPI OR EVENT_TYPE=RestAPI
| lookup lookup_sfdc_usernames USER_ID
| bucket _time span=1d
| stats sum(ROWS_PROCESSED) AS rows BY _time Username
| stats count AS num_data_samples
        max(eval(if(_time >= relative_time(maxtime, "-1d@d"), 'rows', null))) AS rows
        avg(eval(if(_time < relative_time(maxtime, "-1d@d"), 'rows', null))) AS avg
        stdev(eval(if(_time < relative_time(maxtime, "-1d@d"), 'rows', null))) AS stdev
        BY Username
| eval lowerBound=(avg-stdev*2), upperBound=(avg+stdev*2)
| where 'rows' > upperBound AND num_data_samples >= 7
Yes, thank you... we have integrated an S3 bucket and we are onboarding the logs from there.
Basically, how do we get the information below? This is critical for us to know on a daily basis as a service provider.
1. How much data is ingested into Splunk daily. (We have a query for that.)
2. How much data is stored on active searchable storage daily. (Assuming the ingested data should be reflected in the active searchable storage.)
3. How much data has been moved from searchable active (online storage) to Active Archive (offline storage) daily.
4. How much data has been purged/deleted from Active Archive (offline storage) daily.
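For point 2 (searchable storage), a sketch using dbinspect, which reports per-bucket sizes of searchable indexes; run it daily and compare the results over time (the index=* filter is an assumption, narrow it to your indexes):

```
| dbinspect index=*
| stats sum(sizeOnDiskMB) AS searchable_MB by index
| addcoltotals labelfield=index label=TOTAL
```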
Horizontal scan: an external scan against a group of IPs on a single port.

Vertical scan: a single external IP scanning multiple ports on one host.
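Those two definitions translate naturally into a distinct-count search. A sketch, assuming CIM-style src_ip/dest_ip/dest_port fields and an index named firewall (index name, field names, and thresholds are all placeholders to adapt):

```
index=firewall action=blocked
| stats dc(dest_ip) AS hosts_hit dc(dest_port) AS ports_hit by src_ip
| eval scan_type=case(hosts_hit>20 AND ports_hit<=2, "horizontal",
                      ports_hit>20 AND hosts_hit<=2, "vertical")
| where isnotnull(scan_type)
```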
Sorry for not being more descriptive. The two searches use different indexes. I want to alert when search1 AND search2 each return more than zero results.

- How long is the time period involved? Only once a day.
- How often will you have this alert scheduled (different from the first question!)? The first and second searches can run at the same time, because a file is processed within a few seconds of being received.
- Is it a 1-to-1 relationship between "create" events and "processing" events? Yes.
- What's the maximum time difference between those two events? Maximum 1 hour 1 minute.
- Does it matter more if a file gets created but not processed, or is this actually the only thing that matters? Yes, it is critical if a file is received (search1) but not processed (search2).
- Do you already have the filename extracted as a field in these two events? Yes, I have.
- How often do you expect the pair of messages (daily? hourly? hundreds per second?)? Once daily.
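Given those answers, one sketch is to combine both searches and flag filenames that were received but not processed within the stated 1h01m window; alert when the result count is greater than zero (index names and match strings are placeholders for your two searches):

```
(index=idx_a "file received") OR (index=idx_b "file processed")
| eval stage=if(index="idx_a", "received", "processed")
| stats values(stage) AS stages earliest(_time) AS first_seen by filename
| where mvcount(stages)=1 AND stages="received" AND first_seen < relative_time(now(), "-61m")
```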
Hi,

So sorry, I thought I had updated and resolved this thread. As I was trying to get logged in (it took a while!), you sent the other update. That was not the fix for me. While I had a case open with Splunk for a while, I came across this fix:

On the forwarder, in /opt/splunkforwarder/etc/system/local/server.conf, add this stanza:

[httpServer]
mgmtMode = tcp

Regards.
Hi @dm2,
if you run this search in the SA-CIM_vladiator app, do you see fields?
If not (as I understood), you have to share them at the Global level. Fields shared at the app level are visible only inside the app where they were created.
Ciao.
Giuseppe
Found it : https://splunk.my.site.com/customer/s/article/SSL-enabled-inputs-stopped-receiving-data-after-upgrade-from-Splunk-version-8-x-to-version-9-x
I have the following command to create a visualization on a choropleth map, which works, but I only get the categorical color and volume in the legend (without headers). How can I add another column with the full country name, which I retrieve from geo_attr_countries, to the legend, and if possible with headers for the columns?

index=main "registered successfully"
| rex "SFTP_OP_(?<country>(\w{2}))"
| stats count by country
| rename country AS iso2
| lookup geo_attr_countries iso2 OUTPUT country
| stats sum by country
| rename sum(count) AS Volume
| geom geo_countries featureIdField="country"

Basically, if possible, I am trying to get a legend in the bottom right corner something like the one below:

Categorical Color  Country Name  Volume
Color              China         124
Color              Brazil        25
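As far as I know, the built-in choropleth legend cannot render extra columns, but a common workaround is to fold the country name and volume into one categorical field. A sketch of a cleaned-up pipeline, assuming geo_attr_countries returns the full name as country:

```
index=main "registered successfully"
| rex "SFTP_OP_(?<iso2>\w{2})"
| stats count AS Volume by iso2
| lookup geo_attr_countries iso2 OUTPUT country
| eval legend_label=country." (".Volume.")"
| geom geo_countries featureIdField="country"
```

Coloring by legend_label instead of Volume puts "name (volume)" strings in the legend, at the cost of losing the numeric color gradient.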
Hello, Same symptoms here after upgrading from 9.0.5 to 9.1.3... Did you find out what the workaround was? What did you do? Thanks! Ema
Hello Team, hope you are doing well.

- What does the retention configuration look like for 2 years (1 year searchable and 1 year archived) on a Linux instance, and how does it work? (Does each year have its own configuration, and how does that work?)
- What are the paths and instances where those configurations are stored/saved on a Linux instance (CLI)?
- What link can I use to learn more about retention?

Thank you in advance.
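A sketch of what such a retention setup often looks like in indexes.conf on a Linux indexer (the index name and archive path are placeholders). Note that Splunk only ages data out of searchable storage; it does not delete the frozen archive for you, so purging the archive after the second year needs an external cleanup job:

```
# $SPLUNK_HOME/etc/system/local/indexes.conf (or an app's local/ directory)
[my_index]
# keep data searchable (hot/warm/cold) for 1 year
frozenTimePeriodInSecs = 31536000
# when data freezes, copy it to an archive directory instead of deleting it
coldToFrozenDir = /opt/splunk/archive/my_index
```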
Hi everyone,

We need quick help overriding the ACL for an endpoint from our add-on application. We are making a POST request to the endpoint https://127.0.0.1:8089/servicesNS/nobody/{app_name}/configs/conf-{file_name} to modify configuration files, but it gives the error: "do not have permission to perform this operation (requires capability: admin_all_objects)". How do we override this endpoint to use a different capability/role?
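As far as I know, the capability check on the built-in configs/conf-* endpoints is fixed, so the usual options are either to grant admin_all_objects to the role the add-on runs as, or to wrap the change in a custom REST handler registered in restmap.conf. A sketch of the first option in authorize.conf (the role name is a placeholder):

```
# authorize.conf
[role_addon_writer]
importRoles = user
admin_all_objects = enabled
```

Be aware that admin_all_objects is a broad grant; the custom-handler route is safer if you only need to touch one conf file.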