All Posts


| makeresults | fields - _time | eval _raw="<field k=\"_raw\"> <v xml:space=\"preserve\" trunc=\"0\"> \"groupByAction\": \"[{\\\"totalCount\\\": 41117, \\\"action\\\": \\\"update_statistics table\\\"}, {\\\"totalCount\\\": 33793, \\\"action\\\": \\\"reorg index\\\"}, {\\\"totalCount\\\": 22015, \\\"action\\\": \\\"job report\\\"}, {\\\"totalCount\\\": 10252, \\\"action\\\": \\\"reorg table\\\"}, {\\\"totalCount\\\": 8609, \\\"action\\\": \\\"truncate table\\\"}, {\\\"totalCount\\\": 3335, \\\"action\\\": \\\"defrag table\\\"}, {\\\"totalCount\\\": 2628, \\\"action\\\": \\\"add range partitions\\\"}, {\\\"totalCount\\\": 2522, \\\"action\\\": \\\"drop range partitions\\\"}, {\\\"totalCount\\\": 2465, \\\"action\\\": \\\"sp_recompile table\\\"}, {\\\"totalCount\\\": 2227, \\\"action\\\": \\\"update_statistics index\\\"}]\"</v> </field>|<field k=\"_raw\"> <v xml:space=\"preserve\" trunc=\"0\"> \"groupByUser\": \"[{\\\"requestedBy\\\": \\\"rdbmntp\\\", \\\"TotalRequests\\\": 38717}, {\\\"requestedBy\\\": \\\"pstapm\\\", \\\"TotalRequests\\\": 15126}, {\\\"requestedBy\\\": \\\"pirddb\\\", \\\"TotalRequests\\\": 13925}, {\\\"requestedBy\\\": \\\"fiddbtsp\\\", \\\"TotalRequests\\\": 8808}, {\\\"requestedBy\\\": \\\"bkpbs\\\", \\\"TotalRequests\\\": 6513}, {\\\"requestedBy\\\": \\\"arraymgr\\\", \\\"TotalRequests\\\": 5004}, {\\\"requestedBy\\\": \\\"zstapm\\\", \\\"TotalRequests\\\": 4758}, {\\\"requestedBy\\\": \\\"pdspadm\\\", \\\"TotalRequests\\\": 4313}, {\\\"requestedBy\\\": \\\"ptpsadm\\\", \\\"TotalRequests\\\": 3473}, {\\\"requestedBy\\\": \\\"glfinp\\\", \\\"TotalRequests\\\": 3450}]\",</v> </field>|<field k=\"_raw\"> <v xml:space=\"preserve\" trunc=\"0\"> \"lastOneMonth\": \"[{\\\"requestStatus\\\": \\\"Failed\\\", \\\"Total Count\\\": 384}, {\\\"requestStatus\\\": \\\"Succeeded\\\", \\\"Total Count\\\": 3801}, {\\\"requestStatus\\\": \\\"Errors\\\", \\\"Total Count\\\": 540}, {\\\"requestStatus\\\": \\\"Killed\\\", \\\"Total Count\\\": 1}]\",</v> 
</field>|<field k=\"_raw\"> <v xml:space=\"preserve\" trunc=\"0\"> \"lastOneWeek\": \"[{\\\"requestStatus\\\": \\\"Failed\\\", \\\"Total Count\\\": 384}, {\\\"requestStatus\\\": \\\"Succeeded\\\", \\\"Total Count\\\": 3801}, {\\\"requestStatus\\\": \\\"Errors\\\", \\\"Total Count\\\": 540}, {\\\"requestStatus\\\": \\\"Killed\\\", \\\"Total Count\\\": 1}]\",</v> </field>"
| eval event=split(_raw,"|")
| mvexpand event
| eval _raw=event
| fields _raw
| spath field.v output=v
``` The lines above create the event as you shared ```
``` Convert the _raw to compliant JSON ```
| eval _raw="{".v."}"
``` Extract each field - this resolves the escaped double quotes ```
| spath groupByAction
| spath groupByUser
| spath lastOneMonth
| spath lastOneWeek
``` Extract the groups into a multi-valued field ```
| rex max_match=0 field=groupByAction "(?<group>\{[^\}]+\})"
| rex max_match=0 field=groupByUser "(?<group>\{[^\}]+\})"
| rex max_match=0 field=lastOneMonth "(?<group>\{[^\}]+\})"
| rex max_match=0 field=lastOneWeek "(?<group>\{[^\}]+\})"
``` Expand the multi-value field ```
| mvexpand group
``` Extract the fields from the group ```
| spath input=group
``` Output the table ```
| table action totalCount requestedBy TotalRequests requestStatus "Total Count"
Hi, Which user are you running the installer and service as? It looks like that user does not have file permissions. The installer attempts to start the Splunk process after installation. If the Splunk process does not start, the installer assumes the installation failed, rolls back the installation, and removes the Splunk Enterprise instance.

If you use a domain user or MSA, that account does not have NTFS permissions on the Splunk Enterprise installation directory. After installation, you need to explicitly assign NTFS permissions on that directory and all of its subdirectories to the MSA account. However, you cannot do this during installation if you run the msi file directly, and as a result you get the error mentioned above.

Solution: Install Splunk from the command line and use the LAUNCHSPLUNK=0 flag to keep Splunk Enterprise from starting after installation has completed. For example: PS C:\temp> msiexec.exe /i splunk-9.0.4-de405f4a7979-x64-release.msi LAUNCHSPLUNK=0

You can then complete the installation and, before starting Splunk, grant the user "Full Control" permissions on the Splunk Enterprise installation directory and all of its subdirectories.
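Granting that "Full Control" permission can be done from the same elevated prompt — a sketch with icacls, where the install path and service-account name are placeholders you must substitute:

```
:: Grant the service account Full Control (F) on the Splunk install
:: directory and recurse (/T) into all subdirectories.
:: (OI) = object inherit, (CI) = container inherit.
:: "DOMAIN\splunksvc$" is a placeholder MSA name - use your own.
icacls "C:\Program Files\Splunk" /grant "DOMAIN\splunksvc$:(OI)(CI)F" /T
```

After the permissions are in place, start the service normally (e.g. via services.msc or `net start SplunkForwarder` / the Splunkd service) and it should come up without the rollback error.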
<field k="_raw"> <v xml:space="preserve" trunc="0"> "groupByAction": "[{\"totalCount\": 41117, \"action\": \"update_statistics table\"}, {\"totalCount\": 33793, \"action\": \"reorg index\"}, {\"totalCount\": 22015, \"action\": \"job report\"}, {\"totalCount\": 10252, \"action\": \"reorg table\"}, {\"totalCount\": 8609, \"action\": \"truncate table\"}, {\"totalCount\": 3335, \"action\": \"defrag table\"}, {\"totalCount\": 2628, \"action\": \"add range partitions\"}, {\"totalCount\": 2522, \"action\": \"drop range partitions\"}, {\"totalCount\": 2465, \"action\": \"sp_recompile table\"}, {\"totalCount\": 2227, \"action\": \"update_statistics index\"}]"</v> </field>   <field k="_raw"> <v xml:space="preserve" trunc="0"> "groupByUser": "[{\"requestedBy\": \"rdbmntp\", \"TotalRequests\": 38717}, {\"requestedBy\": \"pstapm\", \"TotalRequests\": 15126}, {\"requestedBy\": \"pirddb\", \"TotalRequests\": 13925}, {\"requestedBy\": \"fiddbtsp\", \"TotalRequests\": 8808}, {\"requestedBy\": \"bkpbs\", \"TotalRequests\": 6513}, {\"requestedBy\": \"arraymgr\", \"TotalRequests\": 5004}, {\"requestedBy\": \"zstapm\", \"TotalRequests\": 4758}, {\"requestedBy\": \"pdspadm\", \"TotalRequests\": 4313}, {\"requestedBy\": \"ptpsadm\", \"TotalRequests\": 3473}, {\"requestedBy\": \"glfinp\", \"TotalRequests\": 3450}]",</v> </field>   <field k="_raw"> <v xml:space="preserve" trunc="0"> "lastOneMonth": "[{\"requestStatus\": \"Failed\", \"Total Count\": 384}, {\"requestStatus\": \"Succeeded\", \"Total Count\": 3801}, {\"requestStatus\": \"Errors\", \"Total Count\": 540}, {\"requestStatus\": \"Killed\", \"Total Count\": 1}]",</v> </field>   <field k="_raw"> <v xml:space="preserve" trunc="0"> "lastOneWeek": "[{\"requestStatus\": \"Failed\", \"Total Count\": 384}, {\"requestStatus\": \"Succeeded\", \"Total Count\": 3801}, {\"requestStatus\": \"Errors\", \"Total Count\": 540}, {\"requestStatus\": \"Killed\", \"Total Count\": 1}]",</v> </field>
So either outputs.conf on your forwarder points to the wrong server, or you have DNS problems in your VM.
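For reference, the relevant stanza on the forwarder looks like this — the hostname and port here are placeholders, and whatever you put there must resolve and be reachable from the forwarder:

```
# $SPLUNK_HOME/etc/system/local/outputs.conf (placeholder values)
[tcpout:default-autolb-group]
server = splunk-indexer.example.com:9997
```

A quick check is to run `nslookup` (or `ping`) against that exact hostname from the forwarder host; if name resolution fails there, the problem is DNS, not Splunk.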
Have you found any events in _audit for them? (Try searching by their id)
I know that a colleague of mine logged in to the system today, but with that query I get that the last login was in 2021. Kind regards, Marta
Hi, To achieve synchronization between your Drupal portal and Apigee Edge for new API products, you can try this:

Apigee Edge setup: Set up your API products in Apigee Edge, and make sure each API product has a clear name, description, and any required attributes.

Drupal integration: Use Drupal's extensibility and API capabilities to achieve this synchronization. You can create a custom module, or use existing modules, to connect Drupal with Apigee. Read this doc: https://www.drupal.org/docs/contributed-modules/apigee-edge/synchronize-developers-with-apigee
How do you know it is incorrect? How are you validating the results?
@gcusello I tried this but it is not working — the table still shows only the phrase and the keyword text. index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully" | eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully"),"True","✔") | eval phrase="ReadFileImpl - ebnc event balanced successfully" | table phrase keyword
Please share your complete _raw event in a code block </>
It is trying to connect but it fails with "name or service not known".
Hi, I have an Excel file on a Linux server at a particular path. I have created a monitor input for this file, but I'm not receiving any logs. Can anyone help me ingest that Excel file daily by creating an inputs.conf?
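One likely cause, assuming the file is a native .xlsx workbook: that is a binary (zipped) format, and Splunk's monitor input only produces readable events from plain-text files. A common workaround is to export or convert the workbook to CSV on a schedule and monitor the CSV instead. A sketch of the stanza, with placeholder path, index, and sourcetype:

```
# inputs.conf on the forwarder - all values below are placeholders.
# Monitor the CSV export, not the binary .xlsx itself.
[monitor:///opt/reports/daily_report.csv]
disabled = false
index = main
sourcetype = csv
```

After editing inputs.conf, restart the forwarder and check that the path is being read (e.g. search index=_internal for the file name, or use `splunk list monitor`).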
I modified the query, and it does fetch the latest details from the index, but my event has multiple fields and it is only picking up groupByAction. Can you advise how I can achieve the same for the other fields? The event is as below — it has the fields groupByAction, groupByUser, lastOneMonth, lastOneWeek, lastOneDay: "groupByUser": "[{\"requestedBy\": \"rdbmntp\", \"TotalRequests\": 38717}, {\"requestedBy\": \"pstapm\", \"TotalRequests\": 15126}, {\"requestedBy\": \"pirddb\", \"TotalRequests\": 13925}, {\"requestedBy\": \"fiddbtsp\", \"TotalRequests\": 8808}, {\"requestedBy\": \"bkpbs\", \"TotalRequests\": 6513}, {\"requestedBy\": \"arraymgr\", \"TotalRequests\": 5004}, {\"requestedBy\": \"zstapm\", \"TotalRequests\": 4758}, {\"requestedBy\": \"pdspadm\", \"TotalRequests\": 4313}, {\"requestedBy\": \"ptpsadm\", \"TotalRequests\": 3473}, {\"requestedBy\": \"glfinp\", \"TotalRequests\": 3450}]"
Hi @sekhar463, I suppose that "Node" from the second search is the hostname of the first, and that you want to use the Node from the second as the key to filter the first search. If this is true, you can use the second search as a subsearch of the first, renaming the field, something like this:

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections os=Windows
    [ search index=ivz_em_solarwinds source="solwarwinds_query://Test_unmanaged_Nodes_Data"
      | table Node Account Status From Until
      | dedup Node
      | rename Node AS hostname
      | fields hostname ]
| dedup hostname
| eval age=(now()-_time)
| eval LastActiveTime=strftime(_time,"%y/%m/%d %H:%M:%S")
| eval Status=if(age<3600,"Running","DOWN")
| rename age AS Age
| eval Age=tostring(Age,"duration")
| lookup 0010_Solarwinds_Nodes_Export Caption AS hostname OUTPUT Application_Primary_Support_Group AS CMDB2_Application_Primary_Support_Group, Application_Primary AS CMDB2_Application_Primary, Support_Group AS CMDB2_Support_Group, NodeID AS SW2_NodeID, Enriched_SW AS Enriched_SW2, Environment AS CMDB2_Environment
| eval Assign_To_Support_Group=if(Assign_To_Support_Group_Tag="CMDB_Support_Group", CMDB2_Support_Group, CMDB2_Application_Primary_Support_Group)
| table _time, hostname, sourceIp, Status, LastActiveTime, Age, SW2_NodeID, Assign_To_Support_Group, CMDB2_Support_Group, CMDB2_Environment
| where Status="DOWN" AND NOT isnull(SW2_NodeID) AND CMDB2_Environment="Production"
| sort 0 hostname

This solution has only one limitation: the subsearch can return at most 50,000 results. Ciao. Giuseppe
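Since the original question asked to exclude the hosts that appear as Node (rather than keep only them), the same subsearch pattern can be negated with NOT. A sketch of just the filtering part, under the same field assumptions (Node in the second search corresponds to hostname in the first):

```
index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections os=Windows
    NOT [ search index=ivz_em_solarwinds source="solwarwinds_query://Test_unmanaged_Nodes_Data"
          | dedup Node
          | rename Node AS hostname
          | fields hostname ]
| dedup hostname
```

The subsearch returns a list of hostname values, which the outer search turns into `NOT (hostname=... OR hostname=...)`; the rest of the pipeline (eval, lookup, table, where) stays unchanged.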
Thanks for your suggestions!!
Hi @aditsss, the easiest approach I can suggest is to use JS and CSS following the instructions in the Splunk Dashboard Examples app (https://splunkbase.splunk.com/app/1603). Otherwise, you could use a site such as https://fsymbols.com/ to copy some special symbols and use them as ordinary characters. They don't display well in the Splunk code editor (they appear slightly offset), but the result is really close to your requirement. Then you can use them in your search:

<row>
  <panel>
    <table>
      <search>
        <query>index="abc*" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "ReadFileImpl - ebnc event balanced successfully"
| eval keyword=if(searchmatch("ReadFileImpl - ebnc event balanced successfully")," ","")
| eval phrase="ReadFileImpl - ebnc event balanced successfully"
| table phrase keyword</query>
        <earliest>-1d@d</earliest>
        <latest>@d</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">true</option>
      <option name="wrap">true</option>
      <format type="color" field="keyword">
        <colorPalette type="list">[#118832,#1182F3,#CBA700,#D94E17,#D41F1F]</colorPalette>
        <scale type="threshold">0,30,70,100</scale>
      </format>
    </table>
  </panel>
</row>

Ciao. Giuseppe
Hi Team, I have 2 Splunk searches, and I want to exclude, from the first search, the hosts whose hostname matches the Node field in the 2nd search. How can I join these 2 searches to exclude those hostnames? The common field is the hostname field in the first search, which appears as the Node field in the 2nd search.

index=_internal sourcetype=splunkd source="/opt/splunk/var/log/splunk/metrics.log" group=tcpin_connections os=Windows
| dedup hostname
| eval age=(now()-_time)
| eval LastActiveTime=strftime(_time,"%y/%m/%d %H:%M:%S")
| eval Status=if(age<3600,"Running","DOWN")
| rename age AS Age
| eval Age=tostring(Age,"duration")
| lookup 0010_Solarwinds_Nodes_Export Caption as hostname OUTPUT Application_Primary_Support_Group AS CMDB2_Application_Primary_Support_Group, Application_Primary AS CMDB2_Application_Primary, Support_Group AS CMDB2_Support_Group, NodeID AS SW2_NodeID, Enriched_SW AS Enriched_SW2, Environment AS CMDB2_Environment
| eval Assign_To_Support_Group=if(Assign_To_Support_Group_Tag="CMDB_Support_Group", CMDB2_Support_Group, CMDB2_Application_Primary_Support_Group)
| table _time, hostname, sourceIp, Status, LastActiveTime, Age, SW2_NodeID, Assign_To_Support_Group, CMDB2_Support_Group, CMDB2_Environment
| where Status="DOWN" AND NOT isnull(SW2_NodeID) AND CMDB2_Environment="Production"
| sort 0 hostname

index=ivz_em_solarwinds source="solwarwinds_query://Test_unmanaged_Nodes_Data"
| table Node Account Status From Until
| dedup Node
Hi there, we have set up Splunk in an air-gapped environment. Windows hosts forward logs to a HF via the UF agent on port 9997. The HF then forwards the logs to rsyslog on the indexer side via a data diode. The logs we receive on the indexer contain special characters. Does anyone know how to troubleshoot this? Thank you in advance @splunk
Right now I am getting an error when I try to ping splunkdeploy.customerscallnow.com: name or service not known. I seem to be following pretty good instructions, but I am not yet able to connect.
Hi @karthikm, I suppose that you're speaking of an on-premise installation. Which Add-On are you using for the data ingestion? If I remember correctly, it's possible to define the index for each data source in the GUI. Anyway, you could look at the inputs.conf in the Add-On you're using and check whether the inputs are in two different stanzas (as they should be!). If not, you can override the index value by finding a regex that identifies the Firewall logs, and follow the configurations described in my previous answer https://community.splunk.com/t5/Splunk-Search/How-to-change-index-based-on-MetaData-Source/m-p/619936 or other answers in the Community. Ciao. Giuseppe
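For reference, the index override described there boils down to a props/transforms pair on the indexer or heavy forwarder. This is only a sketch — the sourcetype name, regex, and target index below are placeholders for your environment:

```
# props.conf - attach the transform to the sourcetype that carries
# both kinds of events (placeholder sourcetype name).
[your_firewall_sourcetype]
TRANSFORMS-route_firewall = route_firewall_index

# transforms.conf - rewrite the index for events matching the regex.
[route_firewall_index]
REGEX = <pattern that matches only the firewall events>
DEST_KEY = _MetaData:Index
FORMAT = your_firewall_index
```

`DEST_KEY = _MetaData:Index` is the standard mechanism for index routing at parse time; events that don't match the REGEX keep the index set on the input.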