When I wrote that command, the values I set were in the right place. However, when the data reaches the IDX and is saved, the converted data does not seem to be saved.
I want to add one more flow based on host: I currently store one index and one sourcetype on the IDX, and I want to end up with one index and two sourcetypes.
Hi Victor, thank you so much for your response. I attached a file showing what we do in sequence, and the popup. Maybe this helps to understand what we do, right or wrong. Thanks, Max
index=aws* Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="xXXX" network_environment=test source="API-Gateway-Execution-Logs*"
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| where status=200
| rename _time as request_time
| fieldformat request_time=strftime(request_time, "%F %T")
| join type=inner messageGUID
    [ search kubernetes_cluster="eks-XXXXX*" index="awsXXXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData"
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageGUID" output=messageGUID
    | table messageGUID, _time ]
| table messageGUID, request_time, _time

`_time` is coming out as null in the output. Also, how can I rename this field?
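One thing worth trying (untested sketch; `k8s_time` is just an illustrative name): rename `_time` inside the subsearch before the `join`, so it survives into the joined results under a new name. This also answers the rename question:

```
| join type=inner messageGUID
    [ search kubernetes_cluster="eks-XXXXX*" index="awsXXXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData"
    | rex field=_raw "sendData: (?<json>[^$]+)"
    | spath input=json path="header.messageGUID" output=messageGUID
    | rename _time as k8s_time
    | table messageGUID, k8s_time ]
| table messageGUID, request_time, k8s_time
```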
max_stream_window = <integer>
* For the streamstats command, the maximum allowed window size.
* Default: 10000

This is probably the cause.
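If that default is the limiting factor, it can be raised in limits.conf on the search head. A sketch (the value is illustrative; verify the stanza name against limits.conf.spec for your Splunk version):

```
# $SPLUNK_HOME/etc/system/local/limits.conf
[default]
max_stream_window = 50000
```

A restart may be needed for the change to take effect.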
I tried the above dashboard code. In the first screenshot, no dropdown is selected. In the second screenshot, the "test" env is selected and the query started running for "Unique User/Unique Client". The "np-" value in index source="/aws/lambda/g-lambda-au-test": "test" is substituted into the query already, without selecting the Entity dropdown or time; it auto-ran.
Thank you for the help bowesmana. This solution works, but it seems to cap my results at 10k events. Is this an inherent Splunk thing, or am I missing a piece of the puzzle? I did do a search for only the INCLUDE=YES events.

``` Ensure time descending order and mark the events that have an error ```
| sort - _time
| streamstats window=1 values(eval(if(match(log_data,"error"), _time, null()))) as error_time
``` Save the error time and copy the error time down to all following records until the next error ```
| eval start_time=error_time
| filldown error_time
``` Now filter events within 60 seconds prior to the error ```
| eval INCLUDE=if(_time>=(error_time-60) AND _time<=error_time, "YES", "NO")
``` Now do the same in reverse, i.e. time ascending order ```
| sort _time
| filldown start_time
``` and filter events that are within 60 seconds AFTER the error ```
| eval INCLUDE=if(_time<=(start_time+60) AND _time>=start_time, "YES", INCLUDE)
| fields - start_time error_time
| search INCLUDE=YES
Hi @sphiwee

I think the issue is that your current SPL concatenates all your data into a single field (`report`) separated by line breaks, although it's not clear how that line break is interpreted by Teams. I have previously had success with Microsoft Teams using Markdown or specific JSON structures (like Adaptive Cards) for rich formatting like tables, especially via webhooks. Simple text won't be interpreted as a table. Technically speaking, Teams webhook messages don't support Markdown, and HTML is encoded and treated as text.

You can try modifying your SPL to generate a Markdown-formatted table directly within the search results. This *might* render correctly in Teams, depending on how the alert action sends the payload. Remove your last three lines (`eval row = ...`, `stats values(row) AS report`, `eval report = mvjoin(...)`) and add formatting logic after the `foreach` loops:

index="acoe_bot_events" unique_id = *
| lookup "LU_ACOE_RDA_Tracker" ID AS unique_id
| search Business_Area_Level_2="Client Solutions Insurance" , Category="*", Business_Unit = "*", Analyst_Responsible = "*", Process_Name = "*"
| eval STP=(passed/heartbeat)*100
| eval Hours=(passed*Standard_Working_Time)/60
| eval FTE=(Hours/127.5)
| eval Benefit=(passed*Standard_Working_Time*Benefit_Per_Minute)
| stats sum(heartbeat) as Volumes sum(passed) as Successful avg(STP) as Average_STP, sum(FTE) as FTE_Saved, sum(Hours) as Hours_Saved, sum(Benefit) as Rand_Benefit by Process_Name, Business_Unit, Analyst_Responsible
| foreach * [eval FTE_Saved=round('FTE_Saved',3)]
| foreach * [eval Hours_Saved=round('Hours_Saved',3)]
| foreach * [eval Rand_Benefit=round('Rand_Benefit',2)]
| foreach * [eval Average_STP=round('Average_STP',2)]
```--- Start Markdown Formatting ---```
| fillnull value="N/A" Process_Name Business_Unit Analyst_Responsible Volumes Successful Average_STP FTE_Saved Hours_Saved Rand_Benefit
``` Format each row as a Markdown table row ```
| eval markdown_row = "| " . Process_Name . " | " . Business_Unit . " | " . Analyst_Responsible . " | " . Volumes . " | " . Successful . " | " . Average_STP . "% | " . FTE_Saved . " | " . Hours_Saved . " | " . Rand_Benefit . " |"
``` Combine all rows into a single multivalue field ```
| stats values(markdown_row) as table_rows
``` Create the final Markdown table string ```
| eval markdown_table = "| Process Name | Business Unit | Analyst | Volumes | Successful | Avg STP | FTE Saved | Hours Saved | Rand Benefit |\n" . "|---|---|---|---|---|---|---|---|---|\n" . mvjoin(table_rows, "\n")
``` Select only the final field to be potentially used by the alert action ```
| fields markdown_table

In the alert action configuration, you'll need to reference the result field containing the Markdown. Often you can use tokens like `$result.markdown_table$`.

Considerations for the Markdown approach:
- Character limits: Teams messages and webhook payloads have character limits. Very large tables might get truncated.
- Rendering: Teams Markdown rendering for tables can sometimes be basic and may not be supported at all.
- Alert action app: success depends heavily on *how* your Teams alert action sends the payload. Some might wrap it in JSON, others might send raw text. You might need to experiment.

Please let me know how you get on, and consider adding karma to this or any other answer if it has helped.

Regards
Will
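For reference, a minimal payload in the legacy "MessageCard" format that the older Teams incoming-webhook connector accepts (all field values here are illustrative; newer Teams Workflows expect Adaptive Card JSON instead, so check which one your alert action sends):

```json
{
  "@type": "MessageCard",
  "@context": "https://schema.org/extensions",
  "summary": "RDA Tracker Report",
  "text": "| Process Name | Business Unit | Analyst |\n|---|---|---|\n| Billing Bot | Insurance | J. Doe |"
}
```

If the alert action wraps your result in a structure like this, the `text` field is where the `markdown_table` string would need to land.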
drwx------ Splunk Splunk TA_Akamai_SIEM ... This is what is there for this app on the DS and the HF.
For example with ls -laR /opt/splunk/etc/deployment-apps/whatever_TA  
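To check ownership and modes in one pass, something like the following works. This is a sketch run against a temp directory so it is self-contained; on a real deployment server you would point the same commands at /opt/splunk/etc/deployment-apps/whatever_TA and compare against the user splunkd runs as (often "splunk" — an assumption, check your install).

```shell
# Sketch: the same checks you'd run on the DS, demonstrated on a temp dir.
APP=$(mktemp -d)
touch "$APP/props.conf"
chmod 700 "$APP"
chmod 600 "$APP/props.conf"

# Octal mode of the app directory — 700 means only the owner can enter it (GNU stat)
stat -c '%a' "$APP"

# List anything NOT owned by the expected user; empty output means ownership is consistent
find "$APP" ! -user "$(id -un)"
```

The `find ! -user` form is handy because it prints nothing when everything is owned correctly, so any output at all is a problem to fix.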
Have you tried the table command?

index="acoe_bot_events" unique_id = *
| lookup "LU_ACOE_RDA_Tracker" ID AS unique_id
| search Business_Area_Level_2="Client Solutions Insurance" Category="*" Business_Unit = "*" Analyst_Responsible = "*" Process_Name = "*"
| eval STP=(passed/heartbeat)*100
| stats sum(heartbeat) as Volumes sum(passed) as Successful avg(STP) as Average_STP by Process_Name, Business_Unit, Analyst_Responsible
| eval Average_STP=round('Average_STP',2)
| table Process_Name, Analyst_Responsible, Business_Unit, Volumes, Successful, Average_STP
Hi @kriznikm, let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Hi @dtapia, good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors
I have a query with a lookup, and I have assigned the output to "report" because I want to send the entire report via Teams, but I'm struggling to send it as a table: it comes out as one blob and it's not readable. Here's the output in Teams, and this is my query:

index="acoe_bot_events" unique_id = *
| lookup "LU_ACOE_RDA_Tracker" ID AS unique_id
| search Business_Area_Level_2="Client Solutions Insurance" , Category="*", Business_Unit = "*", Analyst_Responsible = "*", Process_Name = "*"
| eval STP=(passed/heartbeat)*100
| eval Hours=(passed*Standard_Working_Time)/60
| eval FTE=(Hours/127.5)
| eval Benefit=(passed*Standard_Working_Time*Benefit_Per_Minute)
| stats sum(heartbeat) as Volumes sum(passed) as Successful avg(STP) as Average_STP, sum(FTE) as FTE_Saved, sum(Hours) as Hours_Saved, sum(Benefit) as Rand_Benefit by Process_Name, Business_Unit, Analyst_Responsible
| foreach * [eval FTE_Saved=round('FTE_Saved',3)]
| foreach * [eval Hours_Saved=round('Hours_Saved',3)]
| foreach * [eval Rand_Benefit=round('Rand_Benefit',2)]
| foreach * [eval Average_STP=round('Average_STP',2)]
| eval row = Process_Name . "|" . Analyst_Responsible . "|" . Business_Unit . "|" . Volumes . "|" . Successful . "|" . Average_STP
| stats values(row) AS report
| eval report = mvjoin(report, " ")
So how do I check ownership? I have admin rights in the Splunk UI and the root user on the AWS Linux Splunk instance...
It won't hurt. But I would first try checking ownership, not permissions.
It depends on your architecture. See the Masa diagrams - https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 The index-time settings (line breaking, timestamp extraction, indexed-field extraction, and such) are applied on the first "heavy" component (one based on a full Splunk Enterprise installation, not a UF) in the event's path. So if your ingestion path is UF->idx, you need the TAs on the indexers. If you have a TA with modular inputs on a HF, that same HF will do the parsing, so for data coming from this HF you will need index-time settings in a TA there and search-time settings on the SH. If you have a fairly complicated (and very unusual, but I can think of a scenario where it could be used) path like UF1->HF1->UF2->HF2->idx, you need the index-time settings on HF1, since it's the first heavy component. It does the parsing and sends the data on as parsed, so subsequent components don't need to parse it.
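As a concrete illustration of the split (the sourcetype name and settings here are hypothetical, not from any specific TA), the index-time half of a TA's props.conf belongs on the first heavy component, while the search-time half belongs on the SH:

```
# props.conf — index-time settings: first heavy component (indexer or HF)
[my:custom:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S

# props.conf — search-time settings: search head
[my:custom:log]
EXTRACT-status = status=(?<status>\d+)
```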
Can I try giving chmod 755 to that app? Will that work? Or can I remove the app, reinstall it, and push it again?
While the solution with a scripted input is a nice trick from the technical point of view, as a seasoned admin I'd advise against using it, especially on environments you have limited/difficult connectivity with. Any splunkd-spawned solutions which change the general splunkd configuration are prone to leaving your installation in a broken state should anything go wrong. Of course whether it's important depends on how critical the systems are and whether you can tolerate potential downtime vs. what you can save by doing the "automation". The risk is yours. You have been warned.
Since the app is being pulled from the DS by the same process which will be using it (or spawning additional processes under the same user), the permissions on the HF should be good. On the DS, of course, the splunkd process must be able to access the whole directory to make an archive of its contents. 0700 should be OK as long as all files and directories are owned by the user the splunkd process is running as.