All Posts

Below is the error. How do I fix this?

2024-08-05 21:46:52,757 ERROR pid=2311415 tid=MainThread file=base_modinput.py:log_error:309 | Get error when collecting events.
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-palo_alto_cortex_xdr_alert_retriever/bin/ta_palo_alto_cortex_xdr_alert_retriever/aob_py3/modinput_wrapper/base_modinput.py", line 128, in stream_events
    self.collect_events(ew)
  File "/opt/splunk/etc/apps/TA-palo_alto_cortex_xdr_alert_retriever/bin/cortex_xdr_alerts.py", line 64, in collect_events
    input_module.collect_events(self, ew)
  File "/opt/splunk/etc/apps/TA-palo_alto_cortex_xdr_alert_retriever/bin/input_module_cortex_xdr_alerts.py", line 153, in collect_events
    left_to_return = total_endpoints - result_count
UnboundLocalError: local variable 'total_endpoints' referenced before assignment
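The traceback says that total_endpoints is used at line 153 of input_module_cortex_xdr_alerts.py without ever having been assigned on that code path, which usually means the API response parsing that should set it was skipped or failed (for example because the Cortex XDR API call returned an error or an empty reply). Below is a minimal sketch of the pattern and the usual fix, assuming the add-on only binds the variable after a successful API reply; the function and response field names are hypothetical, not taken from the TA's source:

def count_remaining(reply):
    # Bind both counters up front so the subtraction below never hits an
    # unbound local, even when the reply is empty or missing the fields.
    total_endpoints = 0
    result_count = 0
    if reply and "total_count" in reply:  # hypothetical response field names
        total_endpoints = reply["total_count"]
        result_count = reply.get("result_count", 0)
    # Mirrors the failing line: left_to_return = total_endpoints - result_count
    return total_endpoints - result_count

print(count_remaining({}))                                         # 0 instead of UnboundLocalError
print(count_remaining({"total_count": 25, "result_count": 10}))    # 15

In the actual TA the equivalent change is to initialise total_endpoints (and result_count) before the request, or to log and return early when the response is unusable; it is also worth checking the input's API credentials and URL, since a failed call is the usual reason the assignment never runs.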
Hi @adetomiheebrahi, what do you mean by "wrong data"? What's the message that you are receiving? Could you share a sample of your data? Which kind of file do you want to upload? Remember that you can upload only text files (txt, csv, json or html, not xls or xlsx files). Did you use spaces or special chars as column headers? Ciao. Giuseppe
Hi @Thulasinathan_M, you cannot use a calculation (earliest=$_time$-3600) in a search, but you can put the values to use for your search in a lookup; the only attention point is to use the same field names. In the lookup you should put two fields, earliest and latest, and then run a search like the following:

index=main [| inputlookup my_time_periods.csv | fields earliest latest ] user arrival
| timechart span=10m count by user limit=15
| untable _time user value
| join type=left user [| inputlookup UserDetails.csv | eval DateStart=strftime(relative_time(_time, "-7d@d"), "%Y-%m-%d") | where Date > DateStart]

Two additional notes:
don't use index=main, use your own index;
don't use join with an inputlookup: the lookup command is already a left join, and in general use the join command only when you have no other solution, because Splunk isn't a database and join is a very slow command!

Please try this:

index=your_index [| inputlookup my_time_periods.csv | fields earliest latest ] user arrival
| timechart span=10m count by user limit=15
| untable _time user value
| lookup UserDetails.csv user
| eval DateStart=strftime(relative_time(_time, "-7d@d"), "%Y-%m-%d")
| where Date > DateStart

Ciao. Giuseppe
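For illustration only, the my_time_periods.csv lookup referenced above could contain one row per window, with earliest = _time - 3600 and latest = _time derived from the epoch values listed in the question (the file name is the one used in the search; the rows are derived examples, not real data):

earliest,latest
1722884400,1722888000
1722885000,1722888600
1722885600,1722889200
1722886200,1722889800
1722886800,1722890400
1722887400,1722891000

The subsearch [| inputlookup my_time_periods.csv | fields earliest latest ] then expands to an OR of (earliest=... latest=...) pairs, so the outer search effectively covers all the requested windows in one run.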
It looks like you may be going about this the wrong way. Please explain in non-Splunk terms what it is you are trying to achieve.
I know this is an old one, but I went searching in the hope someone had already written it, so sharing here. This is specifically for Entra ID (/Azure AD) audit logs, which are "valid" JSON, but there's some weird stuff going on within the JSON itself, and a real mixture of data structures (probably depending on which developer wrote each piece, I'm guessing). Anyway, hope it helps someone; it deals with the 2-entry key/value data as well as the name/old/new type data.

index=entra_audit activityDisplayName="Consent to application" category="ApplicationManagement"
``` Entra ID Audit Logs - Extract Application Consent JSON into usable fields ```
| spath operationType
| rename targetResources{}.* AS targetResources.*
| table _time index sourcetype source correlationId additionalDetails* activityDisplayName category operationType result initiatedBy.user.ipAddress initiatedBy.user.userPrincipalName targetResources.*
``` additionalDetails is a 2 entry key/value pair in JSON ```
| rename additionalDetails{}.* AS additionalDetails_*
| eval additionalDetails=mvzip(additionalDetails_key,additionalDetails_value, "=")
| mvexpand additionalDetails
| eval Key=mvindex(split(additionalDetails,"="),0), Value=mvindex(split(additionalDetails,"="),1)
| eval additionalDetails.{Key}=Value
``` targetResources.modifiedProperties{} is a 3 entry name/newvalue/oldvalue in JSON ```
| rename targetResources.modifiedProperties{}.* AS targetm_*
| eval targetm=mvzip(targetm_displayName,targetm_newValue, ";")
| eval targetm=mvzip(targetm,targetm_oldValue, ";")
| mvexpand targetm
| eval Key=mvindex(split(targetm,";"),0).".newValue", Value=trim(mvindex(split(targetm,";"),1),"\"")
| eval modifiedProperties.{Key}=Value
| eval Key=mvindex(split(targetm,";"),0).".oldValue", Value=trim(mvindex(split(targetm,";"),2),"\"")
| eval modifiedProperties.{Key}=Value
``` Now extract the permissions, which are in yet another format - multiple fields written as double-bracketed old => double-bracketed new; the JSON format is the same old/new format as the other fields above, yet the old and new values are stored in the "newValue" field only, and "oldValue" is blank ```
``` Within the double brackets are multiple IDs which relate to individual scopes, however we're only interested in the scope and consent type ```
``` Remove spaces ```
| rex mode=sed field=modifiedProperties.ConsentAction.Permissions.newValue "s/ //g"
``` Extract new/old values ```
| rex field=modifiedProperties.ConsentAction.Permissions.newValue "\[(?P<perms_new>\[.*\])\]=>\[(?P<perms_old>\[.*\])\]"
| makemv perms_new delim="],"
| mvexpand perms_new
| eval perms_new = trim(perms_new,"[]")
| rex field=perms_new "ConsentType:(?P<perms_type_new>.*?),.*Scope:(?P<perms_scope_new>.*?),"
| makemv perms_old delim="],"
| mvexpand perms_old
| eval perms_old = trim(perms_old,"[]")
| rex field=perms_old "ConsentType:(?P<perms_type_old>.*?),.*Scope:(?P<perms_scope_old>.*?),"
| rename perms_type_new AS permissions.consentType.newValue perms_scope_new AS permissions.scope.newValue
| rename perms_type_old AS permissions.consentType.oldValue perms_scope_old AS permissions.scope.oldValue
``` Clean up unnecessary fields - Slightly faster to do this before the transaction ```
| fields - Key Value *_key *_value targetm* additionalDetails modifiedProperties.ConsentAction.Permissions* perms*
``` Condense those events down to a single event ```
| transaction correlationId maxspan=1s
``` Format into a nicer table and field names ```
| rename initiatedBy.user.ipAddress AS src initiatedBy.user.userPrincipalName AS user
| rename modifiedProperties.ConsentContext.* AS context.*
| rename modifiedProperties.TargetId.* AS target.*
| table _time index sourcetype source category operationType result src user activityDisplayName correlationId additionalDetails.* targetResources.* target.* context.* permissions.*
``` | collect index=my_summary ```

For performance I suggest you collect that into a summary index, csv or kvstore.

Here's a sanitised copy of an event:

{"id": "Directory_zzzzz-zzzz-zzzz", "category": "ApplicationManagement", "correlationId": "zzzz-zzzz-zzzz-zzzz-zzzz", "result": "success", "resultReason": "", "activityDisplayName": "Consent to application", "activityDateTime": "2024-08-05T06:10:19.1808273Z", "loggedByService": "Core Directory", "operationType": "Assign", "initiatedBy": {"app": null, "user": {"id": "zzzz-zzzz-zzzz-zzzz-zzzz", "displayName": null, "userPrincipalName": "zzzz", "ipAddress": "0.0.0.0", "userType": null, "homeTenantId": null, "homeTenantName": null}}, "targetResources": [{"id": "zzzz-zzzz-zzzz-zzzz-zzzz", "displayName": "Microsoft Graph PowerShell", "type": "ServicePrincipal", "userPrincipalName": null, "groupType": null, "modifiedProperties": [{"displayName": "ConsentContext.IsAdminConsent", "oldValue": null, "newValue": "\"True\""}, {"displayName": "ConsentContext.IsAppOnly", "oldValue": null, "newValue": "\"False\""}, {"displayName": "ConsentContext.OnBehalfOfAll", "oldValue": null, "newValue": "\"False\""}, {"displayName": "ConsentContext.Tags", "oldValue": null, "newValue": "\"WindowsAzureActiveDirectoryIntegratedApp\""}, {"displayName": "ConsentAction.Permissions", "oldValue": null, "newValue": "\"[] => [[Id: zzzz-zzzz-zzzz-zzzz-zzzz_-OGPWDcaCSZ3yquwoJpK3, ClientId: zzzz-zzzz-zzzz-zzzz-zzzz, PrincipalId: zzzz-zzzz-zzzz-zzzz-zzzz, ResourceId: zzzz-zzzz-zzzz-zzzz-zzzz, ConsentType: Principal, Scope: Application.ReadWrite.All Organization.Read.All AuditLog.Read.All openid profile offline_access, CreatedDateTime: , LastModifiedDateTime ]]; \""}, {"displayName": "TargetId.ServicePrincipalNames", "oldValue": null, "newValue": "\"zzzz-zzzz-zzzz-zzzz-zzzz\""}]}], "additionalDetails": [{"key": "User-Agent", "value": "EvoSTS"}, {"key": "AppId", "value": "zzzz-zzzz-zzzz-zzzz-zzzz"}]}
Hi Splunk Experts, I'm not sure how easy this is using Splunk. I have a field (_time) with a list of epoch_time values in it. I want to loop through each value and run a search using that time value in the query below, replacing $_time$. Any advice would be much appreciated. Thanks in advance!!

_time
1722888000
1722888600
1722889200
1722889800
1722890400
1722891000

index=main earliest=$_time$-3600 latest=$_time$ user arrival
| timechart span=10m count by user limit=15
| untable _time user value
| join type=left user [| inputlookup UserDetails.csv | eval DateStart=strftime(relative_time($_time$), "-7d@d"), "%Y-%m-%d") | where Date > DateStart]
Unfortunately not, I see only the host field. I tried it in Search & Reporting and also in the Veeam search field, but I still see only the host field. Is the field alias configured wrong?
What if I want to download a PDF instead of a CSV? Is that possible?
We are on 9.2.2  but the issue started on 9.x
Hi @fredclown, yes, it's possible as @KendallW described, but my question is: why? In this way you duplicate license consumption, and you can use one of the Search Heads to search on all your logs. In addition, using the Indexer Cluster you are more sure to have a complete data set to search, whereas on the local copy you have only a partial set of data. If you anyway want to use the HF for searching, I'd configure it as another Search Head instead of indexing a local copy of a part of your data. Ciao. Giuseppe
Can anyone assist with this?
I am a beginner learning Splunk. I uploaded a file that contains simulated data for analysis, but I am getting wrong data in Splunk. The data is correct when I tried opening it with
Hi @ciphercloudops, the tstats command works only with indexed fields. By default, host, source and sourcetype are indexed fields. Adding other fields as indexed fields is possible if you regularly need faster searches based on those fields, but this has to be done at index time. According to your tests, it seems the accountId and eventOrigin fields are indexed fields, which is why you can use them in a tstats query with no problems. There is an option for querying unindexed fields if your raw data is suitable. If your serviceType field value looks like serviceType=unmanaged (without spaces or quotes/double quotes) you can try the below:

| tstats count where index="my_index" eventOrigin="api" ( accountId="8674756857" OR TERM(serviceType=unmana*) )
Hi @Shashwat .Pandey, for the issue "Analytics Agent is not reporting", it looks like there are a few key areas to check in your configuration and setup for the Analytics Agent.

Next steps:

Check the Python Agent version: ensure that your Python Agent version supports Analytics. It appears you are using Python Agent 21.5, but Transaction Analytics requires Python Agent version 21.10 or later; versions earlier than 21.10 do not support Transaction Analytics. You can find more details here: Deploy Analytics With the Analytics Agent.

Python Agent configuration: review the configuration for the Python Agent. For detailed setup instructions, see Configure Transaction Analytics for Node.js, PHP, and Python Applications, and for the Python Agent settings in detail, see Python Agent Settings.

Analysis of the provided config file:

Checkpoint 1: in the [services:analytics] section of your appdynamics.cfg, the host field should point to the hostname where the Analytics Agent is installed, not to the analytics service connection endpoint. For most configurations, install the Analytics Agent on the same machine as the Python agent. Example: host = localhost

Checkpoint 2: the enabled field is not required in this configuration.

Configure analytics-agent.properties: see "Enable the Standalone Analytics Agent" in Install Agent-Side Components and make sure analytics-agent.properties is correctly configured. Key points to verify include the Controller information and the http.event.endpoint value.

Lastly, make sure to enable Analytics for the specific applications and business transactions you want to monitor: Configure Analytics.

Hope this helps. Regards, Martina
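For reference, a minimal sketch of what the relevant part of appdynamics.cfg could look like under the checkpoints above. Only the host field is actually discussed in this thread; the commented port line is an assumption about a non-default Analytics Agent listener and should be adapted to your own environment:

[services:analytics]
# Hostname of the machine running the standalone Analytics Agent; when it is
# co-located with the Python agent, localhost is sufficient.
host = localhost
# port = 9090   # assumption: set this only if your Analytics Agent listens on a non-default port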
Thanks for being patient, and apologies; I thought I hadn't been able to articulate it clearly, or that you hadn't completely understood me. Now I get it, thanks.
I found that I need a Splunk cloud account (a member of Cloud Stack/Instances) to access the data, which the user I previously had did not have. However, I would like to thank everyone for their efforts in helping me this time. I’m grateful for your mentorship.
Let me first say, that is an excellent description of the requirements. Yes, you can do that with append. Provided host and ip/ip_address match exactly, you can do

index=owner
| rename ip as ip_address
| append [inputlookup host.csv]
| stats values(owner) as owner by host ip_address

Hope this helps. Below is a full emulation you can play with:

| makeresults format=csv data="ip, host, owner
10.1.1.3, host3, owner3
10.1.1.4, host4, owner4
10.1.1.5, host5, owner5"
``` the above emulates index=owner ```
| rename ip as ip_address
| append [makeresults format=csv data="ip_address, host
10.1.1.1, host1
10.1.1.2, host2
10.1.1.3, host3
10.1.1.4, host4"
``` the above emulates | inputlookup host.csv ```]
| stats values(owner) by ip_address host
@LearningGuy use inputlookup to grab the whole csv file into the search, then append the data from your index, then get common values using stats. E.g.

| inputlookup host.csv
| rename ip_address as ip
| append [search index=owner | fields ip, owner]
| stats values(host) as host values(owner) as owner by ip
Is it possible to perform a "left join" lookup from a CSV to an index? Usually a lookup starts with the index, then the CSV file, and the result only produces the correlation (inner join):

index=owner | lookup host.csv ip_address AS ip OUTPUTNEW host, owner

but I am looking for a left join that still retains all of the data in host.csv. Thank you for your help. Please see the example below:

host.csv
ip_address  host
10.1.1.1    host1
10.1.1.2    host2
10.1.1.3    host3
10.1.1.4    host4

index=owner
ip          host   owner
10.1.1.3    host3  owner3
10.1.1.4    host4  owner4
10.1.1.5    host5  owner5

left join "lookup" (expected output) - yellow and green circles (see drawing below)
ip_address  host   owner
10.1.1.1    host1
10.1.1.2    host2
10.1.1.3    host3  owner3
10.1.1.4    host4  owner4

normal "inner join" lookup (index first, then CSV) - green circle
ip_address  host   owner
10.1.1.3    host3  owner3
10.1.1.4    host4  owner4